I look at it a different way: spammers and filters are locked in an evolutionary arms race, and at the moment spammers have found an adaptation that gives them an advantage. In due time the anti-spam filters will adapt as well. It has always been a difficult problem.
I suspect they spend several million dollars a year and at least 20-30 people on this, if not more, and I think you don't have any idea of how hard the problem is or how it's getting harder all the time.
It's going to get even harder as spammers use ChatGPT-like tech to write individual spam messages for each person.
Why do you keep telling others they have no idea how hard it is?
My point from the higher-level comment is that the customer does not care how hard it is. If ChatGPT makes it harder, there is nothing stopping Google from innovating and improving its detection. The comments are calling out that Google seems to be falling behind the curve as more dangerous phishing and spam/fraud emails slip through.
I for one have no sympathy. Google did the same as the other giants and gobbled up as much tech talent as they could, only to lay off thousands later. If you are telling me I need to feel empathy for a company reaping trillions from invasive data harvesting and monopolizing the most used digital services on the planet, I shall play the smallest violin I can find.
> That brings me to something I really want in JS: actual immutable values. If you use `const x = new SomeClass()`, you cannot reassign it, but you can change fields. The first time I encountered `const`, I thought it did the opposite. It would be cool if you could declare something (object, array) to be an immutable value.
That sounds like a fundamental misunderstanding. Variables do not hold objects; they hold references to objects.
const foo = {};
let bar = foo;
foo and bar hold references to the same object; they do not hold the object themselves. foo's reference cannot be changed (it's const); bar's can. But the object is independent of both variables.
If you want the object itself to be unmodifiable there's Object.freeze.
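For example, roughly:

const foo = { x: 1 };
foo.x = 2;            // allowed: the binding is const, the object isn't
Object.freeze(foo);
foo.x = 3;            // now silently ignored (or a TypeError in strict mode)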
const foo ...
makes foo const. If you wanted a shortcut for making the object constant (vs Object.freeze) it would be something like
let foo = new const SomeObject()
This doesn't exist, but it makes more sense than believing that `const foo` somehow makes the object constant. It only makes foo constant (the reference).
I don't want to freeze the object, I want to have a handle that doesn't allow me to modify the object. (Whether that would be at all possible in JS is another question.)
So
let foo = {field:1};
immutable let bar = foo;
bar.field = 2; // error
foo.field = 3; // ok
This is what I actually want when I think "const". I don't really care that you can reuse a variable, or re-seat a value. What I care about is that I receive an object and sometimes want to modify it, and sometimes I want to make sure it stays the same. Maybe somebody else holds a reference and I don't want to surprise them.
(The inverse problem is when I have a function that takes something like a string or a number, and I want to change that from within the function. There is no way to pass a value type by reference. You have to encapsulate the value in an object and pass that. It would be cool if you could say something like `function double(ref x) { &x = x*2; }`.)
I agree that having a "can't modify this object through this reference" guarantee is useful and that JavaScript doesn't have it. TypeScript has it with `Readonly<T>`.
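Roughly, something like this (compile-time only, with no effect at runtime; the names are just for illustration):

interface Point { x: number; y: number; }

function render(p: Readonly<Point>) {
  p.x = 10;  // error: Cannot assign to 'x' because it is a read-only property
}

It only exists at the type level, though; nothing stops a caller holding a plain Point from mutating it later.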
It could be worse. You could be in Python, which has no const whatsoever :P
I also agree pass by reference is useful. JavaScript only has pass by value, similar to Python.
You've confused method chaining and nesting. The proposal itself says that method chaining is easier to read but limited in applicability, while deep nesting is hard to read. The argument against the proposal in the GP comments is that temporary variables make deeply nested code easier to read, and do it better than pipes would.
In your first find, yes, your modification helps me understand that code much more quickly. Especially since I haven't looked at this code in several years.
In that case, patches welcome!
In your second case, as the sibling comment explained, I'm not opposed to chaining in all cases. But if the pipe operator is being proposed to deal with this situation, I'm saying the juice isn't worth the squeeze. New syntax in a language needs to pull its weight. What is this syntax adding that wasn't possible before? In this case, a big part of the proposal's claim is that this sequential processing/chaining is common (otherwise, why do we care?), confusing (the nested case I agree is hard to read, and so I would be reluctant to write it), or tedious (because coming up with temporary variable names is ostensibly hard).
I'm arguing against that last case. It's not that hard, it frequently improves the code, and if you find cases where that's not true (as you did with the `moment` example above) the pipe operator doesn't offer any additional clarity.
Put another way, if the pipe operator existed in JS, would you write that moment example as this?
(1) Named intermediate values are sometimes more readable ... though I have examples where it's very hard to come up with names and I'm not sure it helped.
(2) debugging is easier.
For (2) though, this IMO is a problem with the debugger. The debugger should allow stepping by statement/expression instead of only by line (or whatever it's currently doing). If the debugger stopped at each pipe and showed the values, (2) would mostly be solved. I used a debugger that worked by statements instead of lines once, 34 years ago. Sadly I haven't seen one since. It should be optional though, as it's a tradeoff: stepping through some code can get really tedious if there are lots of steps.
Intermediate variables also have the benefit of making not just the last value available in a debugger view, but also previous values (stored in separate variables). Of course, a debugger could remember the last few values you stepped through, but without being bound to named variables, presentation would be difficult.
It's hard to tell which statement a breakpoint is set on when you can put many breakpoints on the same line.
I have tools that can do it, but I'll still have a better time splitting out a variable for it, especially since what I really want is a log of all the intermediate values, so I can replicate what it's doing on paper.
I had my car stolen from an apartment complex garage in like 1991. Sure it sucks to have my car stolen but the only irreplaceable thing was 36 mix tapes that were in the car. Guess that's an issue that no longer exists :P
I forget where I heard this, but: "It's impossible to remember what it was like not to know something." This generally means all kinds of things are obvious to someone who knows a topic. They're so obvious they're invisible, forgotten about, and therefore you don't even think to teach them.
For me Spotlight fails constantly. I don't know what's tripping it up, but for programs I use often it randomly decides "today I'm not going to find those for you". I have it set to only show programs and nothing else, so no idea why it can't do that simple task, but whatever; several times a week it decides "not this time".
As for tiling, the Mac does do some things that Windows (IIRC) doesn't. One is that if you move a window, or the edge of a window, slowly, it will snap to the border of another window. So you move fast to get it close, then slow down and it will align nicely.
Honestly, I don't consider PNG a simple format. The CRC and the compression are non-trivial. If you're using a new language that doesn't have those features built in, and/or you don't have a reasonable amount of programming experience, then you're likely going to fail (or learn a ton). zlib is 23k lines. "Simple" is not a word I'd use to describe PNG.
Simple formats are like certain forms of .TGA and .BMP: a simple header and then the pixel data. No CRCs, no compression. Done. You can write an entire reader in 20-30 lines of code and a writer in another 20-30 lines as well. Both of those formats have options that can make them more work, but if you're storing 24-bit true color or 32-bit true color + alpha, they are way easier formats.
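To give a feel for it, an uncompressed 32-bit TGA writer is roughly this much code (TypeScript here; the header offsets are from memory, so double-check them against the spec, and the rgba input is assumed to be width*height*4 bytes, top-down):

import { writeFileSync } from "node:fs";

// Uncompressed true-color TGA (image type 2): an 18-byte header, then BGRA pixels.
function writeTga(path: string, width: number, height: number, rgba: Uint8Array) {
  const header = new Uint8Array(18);
  header[2] = 2;                                   // image type 2: uncompressed true-color
  header[12] = width & 0xff;  header[13] = width >> 8;
  header[14] = height & 0xff; header[15] = height >> 8;
  header[16] = 32;                                 // bits per pixel
  header[17] = 0x28;                               // 8 alpha bits, top-left origin
  const bgra = new Uint8Array(width * height * 4);
  for (let i = 0; i < width * height; i++) {       // swap RGBA -> BGRA
    bgra[i * 4 + 0] = rgba[i * 4 + 2];
    bgra[i * 4 + 1] = rgba[i * 4 + 1];
    bgra[i * 4 + 2] = rgba[i * 4 + 0];
    bgra[i * 4 + 3] = rgba[i * 4 + 3];
  }
  writeFileSync(path, Buffer.concat([Buffer.from(header), Buffer.from(bgra)]));
}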
Of course they're not common formats so you're stuck with complex formats like PNG
For audio, my all-time favorite format to work with is raw PCM.
One time, I had to split a bunch of WAV files at precise intervals. I first tried ffmpeg, but its seeking algorithm was nowhere near accurate enough. I finally wrote a bash script that did the splitting much more accurately. All I had to do to find the byte offset for a timestamp in a raw PCM audio file was multiply the timestamp (in seconds) by the sample rate (in Hz) by the sample size (in bytes) by the number of channels. The offset was then rounded up to the nearest multiple of the sample size (in bytes) times the number of channels (this avoids inversions of the stereo channels at cut points).
Once I had the byte offset, I could use the head and tail commands to manipulate the audio streams to get perfectly cut audio files. I had to admire the simplicity of dealing with raw data.
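In code, the arithmetic is just something like this (a TypeScript sketch; the constants assume 16-bit stereo at 44.1 kHz):

const sampleRate = 44100;                      // Hz
const bytesPerSample = 2;                      // 16-bit samples
const channels = 2;                            // stereo
const frameSize = bytesPerSample * channels;   // bytes per sample frame

function cutOffset(seconds: number): number {
  const raw = seconds * sampleRate * bytesPerSample * channels;
  return Math.ceil(raw / frameSize) * frameSize;  // round up to a whole frame
}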
Smart file systems should offer a way to access raw datastreams and elements within more complex filetypes. e.g. one could call fopen("./my_sound.wav/pcm_data") and not have to bother with the header. This would blur the distinction between file and directory, requiring new semantics.
PNG is not a format for uncompressed or RLE "hello worlds". It's a format designed for the Web, so it has to have a decent compression level. Off-the-shelf DEFLATE implementations have been easily available since its inception.
I think it is pretty pragmatic and relatively simple, even though in hindsight some features were unnecessary. The CRC was originally a big feature, because back then filesystems didn't have checksums, people used unreliable disks, and FTPs with automatic DOS/Unix/Mac line ending conversions were mangling files.
PNG could be simpler now if it didn't support 1/2/4-bit depths, keyed 1-bit alpha for opaque modes, or interlacing. But these features were needed to compete with GIF on low-memory machines and slow modems.
Today, the latest image formats run this same tick-every-checkbox competition to an even worse degree: adding animation that is worse than any video format from the last 20 years, supporting all the obsolete analog video color spaces and redundant ICC color profiles alongside better built-in color spaces, etc. By modern standards PNG is super simple.
As for color spaces, that is a case where things get worse before they get better. In the 1990s I remember the horror of making images for the web with Photoshop, because inevitably Photoshop would try some kind of color correction that would have been appropriate for print output but ensured that the colors were wrong every time on screen.
Today I see my high-gamut screen as a problem rather than a solution. I like making red-cyan anaglyph images, and I found out that Windows outputs (16,176,16) when I ask for (0,180,0): it wants to save my eyes from the laser-pointer green of the monitor by desaturating it to something that looks like sRGB green to my eyes. But looking through 3D glasses, that means the right channel bleeds into the left channel. To get the level of control I need for this application, it turns out I have to make both sRGB and high-gamut images and display the right one... which is a product of the complexity of display technology and how it gets exposed to developers.
> There was talk about upgrading PNG to support the equivalent of animated GIFs but it never really happened because of complexity
This was mostly due to overengineering on the part of the PNG committee. Why stop at animated PNGs, when we could support sound and interactivity! MNG is not a simple format, and the spec has MNG-LC ("low complexity") and MNG-VLC ("very low complexity") subsets, because the whole thing is too complex. Did you know you can embed JPEGs in MNGs? That it has synchronization points for sound, even though sound is still "coming at a later date"? That it allows pasting other images into the movie at arbitrary 2D transforms?
MNG's complexity is self-inflicted, because they second-system effect'd their way into features nobody wanted.
APNG, by contrast, is a series of PNG chunks with a couple extra fields on top for timing and control information.
> Today, the latest image formats run this same tick-every-checkbox competition to an even worse degree: adding animation that is worse than any video format from the last 20 years,
Yet just seeking in any random VPx / H.26x / ... format is a PITA compared to trusty old GIFs. It's simple: if you cannot display any random frame N, in any random order, in constant (and very close to zero) time, it's not a good animation format.
You can't do that for GIF. Each frame can be composited on top of the last frame (i.e. no disposal; this allows storing only the part that changed), so to seek to a random frame you may need to replay the whole GIF from the start.
The reason you can seek to any frame is that GIFs tend to be small, so your browser caches all the frames in memory.
Simple formats are PPM / Netpbm; they're ASCII text with an identifier line ("P1" for mono, "P2" for grayscale, or "P3" for colour), a width and height in pixels (e.g. 320 200), a maximum value for P2/P3, then a stream of numbers for pixel values. Line breaks are optional. Almost any language that can count and print can make them; you can write them from APL if you want.
As ASCII they can pass through email and UUNET and clipboards without BASE64 or equivalent. With flexible line breaks they can even be laid out so the monochrome ones look like the image they describe in a text editor.
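For example, a 2x2 colour image in P3 is just the magic number, the dimensions, the max value, and RGB triples:

P3
2 2
255
255 0 0     0 255 0
0 0 255     255 255 255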
The Netpbm format is amazing if you quickly want to try something out and need to generate an image of some sort. The P6 binary format is even simpler: you write the header followed by a raw pixel data blob, e.g. something like:
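/* assuming f is an open FILE* and pixels holds WIDTH*HEIGHT RGB triples */
fprintf(f, "P6 %d %d 255\n", WIDTH, HEIGHT);  /* magic, dimensions, max value */
fwrite(pixels, 3, WIDTH * HEIGHT, f);         /* then the raw RGB bytes */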
I use this all the time. I love that it's simple enough that I can type something like those two lines off the top of my head at this point. And as an alternative to that fwrite(), another common pattern that I use is:
for (int y = 0; y < HEIGHT; ++y)
    for (int x = 0; x < WIDTH; ++x)
    {
        // ... compute r, g, and b one pixel at a time
        printf("%c%c%c", r, g, b);
    }
I also find ImageMagick very convenient for working with the format when my program writes a PPM to stdout:
Yeah, I know, that's not a complete example: endian issues, error checking, etc.
Reading a PPM file is only simple if you already have something to read buffered strings and parse numbers, etc. And it's slow and large, especially for today's files.
It would be nice if the CRCs and compression were optional features, but perversely that would increase the overall complexity of the format. Having compression makes it more useful on the web, which is why we're still using it today (most browsers do support BMP, but nobody uses it)
The fun thing about DEFLATE is that compression is actually optional, since it supports a non-compressed block type, and you can generate a valid stream as a one-liner* (with maybe a couple of extra lines to implement the adler32 checksum which is part of zlib)
The CRCs are entirely dead weight today, but in general I'd say PNG was right in the sweet-spot of simplicity versus practical utility (and yes, you could do better with a clean-sheet design today, but convincing other people to use it would be a challenge).
Edit 2: Actual zlib deflate oneliner, just for fun:
deflate=lambda d:b"\x78\x01"+b"".join(bytes([(i+0x8000)>=len(d)])+len(d[i:i+0x8000]).to_bytes(2,"little")+(len(d[i:i+0x8000])^0xffff).to_bytes(2,"little")+d[i:i+0x8000]for i in range(0,len(d),0x8000))+(((sum(d)+1)%65521)|(((len(d)+sum((len(d)-i)*c for i,c in enumerate(d)))%65521)<<16)).to_bytes(4,"big")
The usual answer is that "checksumming should be part of the FS layer".
My usual retort to such an assertion is that filesystem checksums won't save you when the data given to the FS layer is already corrupted, due to bit flips in the writer process's memory. I personally have encountered data loss due to faulty RAM (admittedly non-ECC, thanks to Intel) when copying large amounts of data from one machine to another. You need end-to-end integrity checks. Period.
I agree with the "usual" answer, or more generally, "the layer above". We shouldn't expect every file format to roll its own error detection.
If you truly care about detecting bit-flips in a writer process's memory, that's a very niche use-case - and maybe you should wrap your files in PAR2 (or even just a .zip in store mode!).
99% of in-the-wild PNGs are checksummed or cryptographically signed at a layer above the file format (e.g. as part of a signed software package, or served over SSL).
Edit: Furthermore, the PNG image data is already checksummed as part of zlib (with the slightly weaker adler32 checksum), so the second layer of checksumming is mostly redundant.
> We shouldn't expect every file format to roll its own error detection.
On the other hand, why not? If you are dealing with files that are usually 200kB+, putting 4 or 16 bytes towards a checksum is not a big deal and can help in some unusual situations. Even if the decoder ignores it for speed, the cost is very low.
The space cost is negligible, but the time cost for the encoder is real. Since most decoders do verify checksums, you can't just skip it. Take fpng[1] as an example, which tries to push the boundaries of PNG encode speed.
> The above benchmarks were made before SSE adler32/crc32 functions were added to the encoder. With 24bpp images and MSVC2022 the encoder is now around 15% faster.
I can't see the total percentage cost of checksums mentioned anywhere on the page, but we can infer that it's at least 15% of the overall CPU time, on platforms without accelerated checksum implementations.
I didn't infer 15% from the way it was written there.
But most platforms these days have some form of CRC32 "acceleration". Adler32 is easy to compute so I'm even less concerned there.
Does 15% more time to encode matter? How much time is spent encoding files vs decoding? That is probably still a negligible amount of compute, out of the total compute spent on PNGs.
Your specific number seems to come from an (old version of) an encoder that has super-optimized encoding and not (yet) optimized CRC.
CRC can't save you from faulty RAM. It can save you from bitrot in data at rest and from transmission errors. If you have faulty RAM, all bets are off. The data could be corrupted after it's been processed by the CPU (to compute the CRC) and before it's been sent to the storage device.
Arguably, the real reason CRC is useless is that most people don't care about the data integrity of their PNGs. Those who do care probably already have a better system of error detection, or maybe even correction.
Agree. Programming video games in the early 2000s, TGA was my go-to format. Dead simple to parse and upload to OpenGL, support for transparency, true color, all boxes ticked.
I once wrote a PCX decoder in Pascal outputting to VGA mode 13h. The cool part for me was that it had run-length encoding, which I was able to figure out trivially just by reading the spec. It may not have been the most efficient, but it was way easier than trying to figure out GIF!
I really like QOI (the Quite OK Image format). It achieves similar compression to PNG, but it's ridiculously easy to implement (the entire spec fits on a single page), and it encodes and decodes many times faster than PNG.
I'm also a big fan of QOI as a simple image format.
Yes, it's not as good as PNG (as the sibling comments point out), but I view it more as an alternative to PPM (and maybe a BMP subset), as something I can semi-quickly write an encoder/decoder for if needed.
IMO, PNG is on a completely different level. Case in point: in the linked article the author says not to worry about the CRC implementation and to "just use a lib"... If that's the case, why not just use a PNG lib?
It depends mostly on the year of birth of the beholder.
I imagine in a couple of decades that "built-in features" of a programming environment will include Bayesian inference, GPT-like frameworks and graph databases, just as now Python, Ruby, Go, etc. include zlib by default, and Python even includes SQLite by default.
Some languages will. However, there will also be a constant resurgence of brand-new "simple" languages without all of that cruft that "you don't need" (read: that whoever came up with the language doesn't need).
Another relatively simple format, which is apparently also superior to PNG in terms of compression and speed, is the Quite OK Image format (QOI):
It's dead simple to emit. The P6 binary version is just a short header, followed by RGB pixel data, one byte per channel.
If you don't have a PNG encoder handy and need a quick "I just need to dump this image to disk to view it" for debugging, PPM is a great format due to how trivial it is. But it doesn't fit a lot of use cases (e.g., files are huge, because no compression).
TIFF, on the other hand is a "highest common denominator, lowest common denominator, what the hell, let's just throw every denominator -including uncommon ones- in there" format.
For example, you can have images with four (or more) color channels, of different bit lengths, and different gammas and image characteristics (I actually saw these in early medical imaging). You can have multiple compression schemes, tile-based or strip-based layout, etc. A lot of what informed early TIFF was drum scanners and frame captures.
Writing TIFF: Easy.
Reading TIFF: Not so easy. We would usually "cop out," and restrict to just the image formats our stuff wrote.
I would say Netpbm is similar. Writing it is easy. … reading it … not so much.
PPM is just one format; Netpbm is a whole family. The "P6" is sort of an identifier saying we're using that particular format; the other identifiers mark other formats, like greyscale, or monochrome, or pixel data encoded in ASCII. The header is in text and permits more flexibility than it probably should. Channels greater than a byte are supported.
Writing a parser for the whole lot would be more complex. (I think TIFF would still beat it, though.) Just dumping RGB? Easy.
Not sure how commonly known it is, but TIFF's extended cousin, GeoTIFF, is a standard for GIS data because of the flexibility you describe, especially the (almost) limitless number of channels and the different data formats per channel.
At that point you're not dealing with 'images', but instead raster datasets: gridded data. So, you can combine byte t/f results with int16 classification codes, with float32 elevation data, with 4 channels of RGB+Near Infrared imagery data in uint32, plus some arbitrary number of gridded satellite data sources.
That can all be given lossless compression and assigned geotagging headers, and the format itself is (afaik) essentially open.
I don't know, because zlib makes concessions for every imaginable platform, has special optimizations for them, plus is in C which isn't particularly logic-dense.
> The CRC and the compression are non-trivial.
CRC is a table and 5 lines of code. That's trivial.
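For reference, the table plus the loop is roughly this much (the standard reflected CRC-32 that PNG and zlib use; TypeScript just for illustration):

const CRC_TABLE = new Uint32Array(256);
for (let n = 0; n < 256; n++) {
  let c = n;
  for (let k = 0; k < 8; k++) c = (c & 1) ? 0xedb88320 ^ (c >>> 1) : c >>> 1;
  CRC_TABLE[n] = c >>> 0;
}

function crc32(data: Uint8Array): number {
  let c = 0xffffffff;                                   // initial value
  for (const b of data) c = CRC_TABLE[(c ^ b) & 0xff] ^ (c >>> 8);
  return (c ^ 0xffffffff) >>> 0;                        // final xor, as unsigned
}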
> zlib is 23k lines
It's not needed to make a PNG reader/writer; zlib is massive overkill for that. Here's a tiny deflate/inflate implementation [2] under 1k lines (and it could be much smaller if needed).
stb[0] has single headers of ~7k lines total including all of the formats PNG, JPG, BMP, PSD, GIF, HDR, and PIC. Here's [1] a 3k-line single-file PNG version with tons of #ifdefs for all sorts of platforms. Remove those and I'd not be surprised if you could do it in ~1k lines (which I'd consider quite simple compared to most of today's media formats).
> Of course they're not common formats so you're stuck with complex formats like PNG
BMP is super common and easy to use anywhere.
I use flat image files all the time for quick and dirty stuff. They quickly saturate disk and network speeds (say, when recording a few decent-speed cameras), and I've found that the PNG compression meant to alleviate that instead saturates CPU speeds (some libs are super slow, some are vastly faster). I've many times made custom compression formats to balance these for high-performance tools when neither something like BMP nor something like PNG would suffice.
While PNG is definitely not as simple as TGA, I'd say it's "simple" in that its spec is mostly unambiguous and implementing it is straightforward. For its relative simplicity it's very capable and works in a variety of situations.
One nice aspect of PNG is that it gives a reader a bunch of data to validate before it even starts decoding image data. For instance, a decoder can check for the magic bytes, the IHDR, and then the IEND chunk, and reasonably guess the file is trying to be a PNG. The chunks also give you some metadata to validate before you start decoding their contents. There are a lot of chances to bail early on a corrupt file and avoid decode errors or exploits.
A format like TGA, with a simplistic header and a blob of bytes, is hard to validate before you start decoding. A file extension or a MIME header doesn't tell you what the bytes actually are, only what some external system thinks they are.
The zlib format includes uncompressed* chunks, and CRC is only non-trivial if you're also trying to do it quickly, so a faux-zlib can be much, much smaller.
(I don't recall if I've done this with PNG specifically, but consider suitably crafted palettes for byte-per-pixel writing: quick-n-dirty image writers need not be much more complex than they would've been for netpbm)
* exercise: why is this true of any reasonable compression scheme?
I've done this. For a project where I didn't want any external dependencies, I wrote an uncompressed PNG writer for RGBA8 images in a single function. It's just over 90 lines of C++:
> why is this true of any reasonable compression scheme?
Any? I wouldn't say that. If you took LZ4 and made it even simpler by removing uncompressed chunks, you would only have half a percent of overhead on random data. A thousandth of a percent if you tweaked how it represents large numbers.
TIL. IIUC, LZ4 doesn't care about the compression ratio (which, you're correct, is what I had been alluding to) but does strongly care about guaranteeing a maximum block size. (So it's still the same kind of concern, just on an absolute rather than a relative basis.)
BMP is really great: the whole format is described on Wikipedia with enough detail to code it yourself in literally 10 minutes, and the 'hardest' part of creating (or parsing) a BMP is counting the bytes to pad the data correctly, and remembering where [0,0] is :)
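To illustrate, a minimal 24-bit writer is roughly this (a TypeScript sketch; the row padding and the bottom-up row order are exactly the two gotchas above, and the rgb input is assumed to be top-down width*height*3 bytes):

import { writeFileSync } from "node:fs";

// BITMAPFILEHEADER (14 bytes) + BITMAPINFOHEADER (40 bytes), then BGR rows,
// stored bottom-up, each row padded to a multiple of 4 bytes.
function writeBmp(path: string, width: number, height: number, rgb: Uint8Array) {
  const rowSize = Math.ceil(width * 3 / 4) * 4;   // the padding everyone forgets
  const dataSize = rowSize * height;
  const buf = Buffer.alloc(54 + dataSize);        // unused fields stay zero
  buf.write("BM", 0);
  buf.writeUInt32LE(54 + dataSize, 2);            // file size
  buf.writeUInt32LE(54, 10);                      // offset to pixel data
  buf.writeUInt32LE(40, 14);                      // BITMAPINFOHEADER size
  buf.writeInt32LE(width, 18);
  buf.writeInt32LE(height, 22);                   // positive height => bottom-up
  buf.writeUInt16LE(1, 26);                       // planes
  buf.writeUInt16LE(24, 28);                      // bits per pixel
  buf.writeUInt32LE(dataSize, 34);                // image size
  for (let y = 0; y < height; y++) {
    const row = 54 + (height - 1 - y) * rowSize;  // row 0 in the file is the bottom row
    for (let x = 0; x < width; x++) {
      const s = (y * width + x) * 3;
      buf[row + x * 3 + 0] = rgb[s + 2];          // B
      buf[row + x * 3 + 1] = rgb[s + 1];          // G
      buf[row + x * 3 + 2] = rgb[s + 0];          // R
    }
  }
  writeFileSync(path, buf);
}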
But there are lots of BMP versions - wiki says "Many different versions of some of these structures can appear in the file, due to the long evolution of this file format."
If you think PNG is complex, have a gander at WebP. That plane crash of a format is a single frame of VP8 video. Outside of a Rube Goldberg web browser, the format is useless.
I don't know about other platforms, but .webp is very well supported on Linux. I've got .webp files showing up just fine in Emacs and picture viewers, and ImageMagick's tools support .webp as well.
Lossless WEBP is smaller than optimized/crushed PNG files.
And I'd say that's quite a feat, which may explain the complexity of the format.
So WEBP may be complicated but if my OS supports it by default, where's the problem? It's not as if I needed to write another encoder/decoder myself.
If you want to handle the format by yourself from scratch it's super complex indeed, but OTOH everyone just uses libwebp which has a very simple API, especially compared to something like libpng. I have added WebP support via libwebp into Allegro 5 myself and didn't even have to stop to think, it was as straightforward as it gets - and implementing animated WebPs wasn't hard either.
WebP is useful for lossless image storage for games/game engines, it takes roughly 80% of the time to load/decode vs the same image stored as a png, and is usually significantly (multiple megabytes) smaller for large textures. That stuff doesn't matter too much in a web browser, but in a game where you have potentially hundreds of these images being loaded and unloaded dynamically and every millisecond counts, it's worthwhile.
Erm, aren't both WebP and PNG rather useless for games? How do you convert those formats on the fly into one of the hardware-compressed texture formats consumed by the GPU (like BCx, ETC or ASTC)? If you're decoding PNG or WebP to one of the linear texture formats, you're wasting a ton of GPU memory and texture sampling bandwidth.
Hardware-compressed texture formats introduce compression artifacts, which is fine for some art styles or PBR maps that don't need to be super accurate, but for some styles (such as pixel art or "clean" non-pixel styles, in both 2D and 3D) lossless compression is preferred, and yeah, they're just decoded into bitmap data on the fly. Whether that wastes memory or not is subjective and depends on the use case. If you're pushing 4K PBR maps for terrain to the GPU, using lossless formats for storage isn't smart, but you could argue that for many textures, using VRAM formats wastes disk/download space vs lossless (especially on mobile devices or webgl/wasm where space matters more). If disk space/download size isn't a concern, then uncompressed VRAM formats can work for smaller textures.

There is an initial decoding/upload cost to compressed lossless images, and they're not optimised well for streaming, but at least with pixel art that's not a huge concern, as textures tend to have small dimensions; a spritesheet in a VRAM format, though, can quickly balloon to ridiculous sizes for what is otherwise low-resolution artwork. Of all the open formats that support lossless compression, are easy to link against, and have wide platform support, WebP is good, and smaller/faster than PNG for basically all images.

Basis Universal is a decent solution to the disk-size problem of traditional VRAM formats, but it still isn't lossless (afaik?). Oodle is new to me; it looks good and appears to solve all of the above if the blurb is to be believed. It's a shame it's proprietary; I'd use it right away if it were FOSS.
IME most 2D games use uncompressed textures. Looking perfect matters less if you're going to stretch it across a 3D tri and do a bunch of fancy lighting.
One of the annoyances of the TGA format is that it has no signature at the beginning of the file; the signature is at the end. This allows you to craft a TGA file that could be misidentified.
1000 square feet is only small in the USA. 1000 sqft is the average size of a home in Japan; 50% are smaller. There are plenty of creative ways to use 1000 sqft.