Daemon404's comments | Hacker News

(long time FFmpeg dev here)

You are being downvoted, but you are entirely correct. This is also explicitly not allowed in FFmpeg, but this was pushed after many months, with no heads up on the list, no final review sign off, and with some developers expressing (and continuing to express) reservations about its quality on the list and IRC.


That's really unfortunate to hear. I'm a huge fan of Webrtc and Pion, and was very excited to get some ffmpeg integration -- hopefully some of the quality issues will be ironed out before the next ffmpeg release


There's quite some time until the next release, I believe, so it should be.

The biggest thing missing right now is NACK support, and one of the authors has said they intend to do this (along with fixing old OpenSSL version support, and supporting other libraries). Until that is done, it isn't really "prod ready", so to speak.

For some context: in the past there has been a history of half-supported things being pushed to FFmpeg by companies or people who just need some subset of $thing, and of vendors using that to sell their products with "FFmpeg isn't good enough" marketing while the feature is either brought up to standard or, in some cases, removed as the original authors vanish. So it's perhaps a touchy subject for us :) (and why my post was perhaps unnecessarily grumpy).

As for the git / premature push stuff, I strongly believe it is a knock-on effect of mailing list based development - the team working on this support did it elsewhere, and had a designated person send it to the list, meaning every bit of communication is garbled. But that is a whole different can of worms :D.


You have that backwards - it must be dynamically linked. Static linking without providing your source would violate the LGPL.


Can you drill down a bit more into this? I would consider static linking to be including unmodified ffmpeg with my application bundle and calling it from my code (either as a pre-built binary from ffmpeg official or compiled by us for whatever reason, and called either via a code interface or from a child process using a command line interface). Seems bsenftner's comment roughly confirms this, though their original comment does make the distinction between the two modes.

What's someone to do?


Static linking means combining compiled object files (e.g. your program and ffmpeg) into a single executable. Loading a .so or .dll file at runtime would be dynamic linking. Invoking through a child process is not linking at all.

Basically you must allow the user to swap out the ffmpeg portion with their own version. So you can dynamically link with a .dll/.so, which the user can replace, and you can invoke a CLI command, which the user can replace. Any modifications you make to the ffmpeg code itself must be provided.
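To make the "user can swap it out" point concrete, here is a minimal Python sketch of what run-time dynamic loading looks like. It uses libm purely as a stand-in for a library like libavcodec (the names and the fallback path are my own illustration, not anything from ffmpeg):

```python
import ctypes
import ctypes.util

# Resolve the shared library from the user's system at run time.
# Because the library is a separate file found on the system, the
# user can replace it with their own (interface-compatible) build
# without relinking this program -- the property the LGPL wants.
path = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(path)

libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # 1.0
```

Statically linking would instead copy the library's object code into your executable, which is why the LGPL then requires you to provide your own object files so the user can relink.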


It is widely known and accepted that you need to dynamically link to satisfy the LGPL (you can static link if you are willing to provide your object files on request). There is a tl;dr here that isn't bad: https://fossa.com/blog/open-source-software-licenses-101-lgp...

But specifically, the bit in the LGPL that matters is Section 5: https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html#S... - particularly paragraph 2.

As always, IANAL, but I also have worked with a lot of FOSS via lawyers.

Also, this is and always has been the view of upstream FFmpeg. (Source: I work on upstream FFmpeg.)


If one statically links ffmpeg into a larger proprietary application, the only source files one needs to supply are the ffmpeg sources, modified or not. The rest of the application's source does not have to be released. In my (now ex) employer's case, only the low level av_read_frame() function was modified. The entire ffmpeg version used, plus a notice about that being the only modification, is in the software as well as on the employer's web site in multiple places. They're a US DOD contractor, so their legal team is pretty serious.


Section 6:

“Also, you must do one of these things:

a) […] if the work is an executable linked with the Library, [accompany the work] with the complete machine-readable ‘work that uses the Library’, as object code and/or source code, so that the user can modify the Library and then relink to produce a modified executable containing the modified Library. […]

b) Use a suitable shared library mechanism for linking with the Library. A suitable mechanism is one that (1) uses at run time a copy of the library already present on the user's computer system, rather than copying library functions into the executable, and (2) will operate properly with a modified version of the library, if the user installs one, as long as the modified version is interface-compatible with the version that the work was made with.

[…]”


ffmpeg's own https://www.ffmpeg.org/legal.html seems like a good reference IMO.

To @keepamovin, "called either via a code interface or from a child process using a command line interface" -- regardless of the license terms, fork()/exec()'d programs "could never" impose any licensing requirements on the parent because the resulting interaction among parent/child is not a derived work. As usual: IANAL, this probably pertains more to USC than other jurisdictions.


> What's someone to do?

Release your code as GPL


ffmpeg is LGPL so no need to release your code under GPL if you follow the LGPL licensing guidelines.


This is a constant issue for me (Chrome on Android) on GMail's Web UI (yes, I insist on using the Web UI...). So many emails are entirely unreadable since they go right off the right side, and you can't zoom.


Pretty much everyone (including my $dayjob) seems to do some webm parsing, rewriting, and/or remuxing in JS, or on the backend post-record. There are half a dozen ad hoc webm parsers floating around GitHub for this reason, and a few more minimal WebM or ISOBMFF muxers.
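For flavor, the EBML layer all those ad hoc parsers have to deal with is simple at the byte level but fiddly to get right. A minimal sketch (my own illustration, not taken from any particular parser) of decoding EBML's variable-length integers, which every WebM parser needs:

```python
def decode_vint(data, pos=0):
    """Decode one EBML variable-length integer starting at `pos`.

    The number of leading zero bits in the first byte, plus one,
    gives the total length in bytes. This decodes size-style vints
    (marker bit stripped; element IDs conventionally keep it).
    Returns (value, bytes_consumed).
    """
    first = data[pos]
    if first == 0:
        raise ValueError("vints longer than 8 bytes not handled here")
    length = 8 - first.bit_length() + 1
    value = first & (0xFF >> length)
    for i in range(1, length):
        value = (value << 8) | data[pos + i]
    return value, length

# A WebM/Matroska file begins with the EBML header element, ID 0x1A45DFA3.
assert decode_vint(b"\x81") == (1, 1)        # 1-byte vint
assert decode_vint(b"\x40\x02") == (2, 2)    # 2-byte vint
```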

And half the tools don't work on Safari, which produces ISOBMFF in its MediaRecorder implementation.

It really seems to me, as a user (not involved in the standards), like it was some internal Chrome functionality that got an API slapped on top and made into a spec. Nothing seems well designed, or... designed at all, tbh.

(Can you tell I've had to work with MediaRecorder? When I read 'The skater punk’s guide to MediaRecorder' I thought it would be one sentence: 'Put it in the trash.')

Apologies for the salt.


So funny to read this for me. I made a pretty good living for myself leveraging MediaRecorder. And "pretty good" is an understatement.

One person's garbage is another's gold....


I can only speak for myself here, but why would step 1 be "become a user"? I ask because I don't fully grok why I would want to contribute to a project I don't use (either in my personal time, or work time)?


It’s worthwhile to be explicit there. For example, a person can be a heavy user of the Pandas library but rarely use the underlying NumPy library directly. There could be motivation to contribute to NumPy with the expectation that Pandas will benefit too.

Based on the article, this user should start learning to use numpy directly, and I can agree with that.


> why I would want to contribute to a project I don't use

Sometimes I might have an academic or research interest in a field I'm not actually working in or I read about a topic that sounds interesting. Then contributing to a project working with that topic or field might seem like a good way to learn and get more into that topic.


The simple answer would be that you don't have to. The underlying assumption is that someone already has a personal or professional interest in those projects.


The demo seems to never load for me on Android Chrome? It sits with an off-center spinner forever.


This is some weird bug. I tested it in Chrome for Android and it doesn't load successfully, but Firefox for Android works fine. On desktop, both Firefox and Chrome work properly. It seems Chrome's implementation is not consistent across platforms.


Chrome does seem to save the JPEG version of some WebP or AVIF URLs where a JPEG version is also available, although it seems to be 'clever' about it rather than explicitly offering the option, which can make saving the actual WebP or AVIF mildly annoying.


I've noticed this too but I couldn't determine any clear patterns. It's also very confusing when the file ending in the URL does not match what I download (that's why an explicit "save as" might be better than an implicit auto conversion). I think I also had situations where it did the opposite, download the webp even though the URL had jpeg in the name. I assumed that it's the webserver serving the image being too clever.
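That "webserver being too clever" behavior is usually Accept-header content negotiation: the URL's file extension is just a key, and the bytes served depend on what the client says it accepts. A rough sketch of the server-side logic (my own illustration, not any particular server's code):

```python
def pick_image_variant(accept_header, available):
    """Pick an image MIME type based on the request's Accept header.

    This is why a URL ending in .jpg can come back as WebP: the server
    prefers the most modern format the client advertises support for,
    regardless of the filename.
    """
    for fmt in ("image/avif", "image/webp"):
        if fmt in accept_header and fmt in available:
            return fmt
    return "image/jpeg"

print(pick_image_variant("image/avif,image/webp,*/*",
                         {"image/webp", "image/jpeg"}))  # image/webp
print(pick_image_variant("*/*", {"image/jpeg"}))         # image/jpeg
```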


It's been my experience that part of the reason browsers get stuff wrong even when it's clear what to do is that entirely different teams work on parts of the browser that all need a proper color pipeline, and while one team will learn and do it correctly, the new team working on the different part has to go through the same process again.

You can see this, for example in Firefox's image pipeline which seems to assume everything is 601 (because JPEG), and this means 709 AVIFs won't render correctly (and are thus off by default currently).

Or in Chrome, the MediaRecorder API will create HD H.264 streams that are untagged, and are 601. Which Chrome then assumes is 709 based off the res.

Or, also in Chrome, the WebCodecs team not having talked to the Media team, seemingly (?), before starting work on what kind of buffer gets returned to the user, and what the semantics of its color are. (I think this is resolved now, though - this was a year or two back when they engaged with VideoLAN over this API)
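The resolution-based guess mentioned above is a widespread heuristic for untagged streams. A rough sketch of the idea (my own illustration, not Chrome's actual code):

```python
def guess_matrix_coefficients(width, height):
    """Guess a YUV matrix for an untagged stream from its resolution.

    A common heuristic: anything larger than SD is assumed BT.709,
    SD-sized video BT.601. This is exactly why an untagged HD stream
    that was actually encoded with 601 gets rendered with the wrong
    matrix -- the heuristic overrides reality.
    """
    if width > 1024 or height > 576:
        return "bt709"
    return "bt601"

print(guess_matrix_coefficients(1920, 1080))  # bt709
print(guess_matrix_coefficients(720, 480))    # bt601
```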


Honestly a big reason is that there simply aren't tests.


I miss the days when browsers took pride in announcing[1] their new version passed Acid2/Acid3. Sure, the tests had issues, but they were self contained and anyone could see which browsers mostly passed and which were missing half of the features. Unfortunately, the modern trend towards monopoly is increasingly hostile to ideas like "open standards" and "interoperability", so I doubt we'll see this kind of test used again anytime soon.

[1] https://en.wikipedia.org/wiki/Acid2#Timeline_of_passing_appl...


The modern equivalent is the Web Platform Tests[1], which are far more extensive than the Acid tests were. Browser conformance is tracked continuously[2].

[1] https://github.com/web-platform-tests/wpt [2] https://wpt.fyi/


The WPT is the exact opposite of what I was talking about. The interesting thing about the Acid tests isn't their utility as a conformance test. Sure, that's what they were, but even at the time the problems with the tests were well known. The features they tested were incomplete and somewhat arbitrary. Most browsers didn't even bother trying to get the last 3 points on Acid3, which tested SVG fonts. I'm sure WPT is far more useful from a technical perspective.

Unfortunately, WPT is missing the actual feature that made the Acid tests interesting: a design that could be used and understood by anyone. Most people are not going to do the incredibly complex work required to actually run the WPT tests (which apparently involves its own command line utility to manage the process, requires knowledge of python/pip/virtualenv, and demands understanding platform-specific documentation for both setting up and running the tests).

For Acid2/Acid3, anybody could simply load the test URL to run the tests themselves, on their own browser and OS, and see the results first-hand. It doesn't matter that most people didn't know the various CSS/Javascript/etc features being tested. Seeing that your browser completely failed Acid3 was obvious[1]. It was exciting to see for yourself if the latest browser update scored higher on Acid3.

[1] https://en.wikipedia.org/wiki/File:Acid3ie8rc1.png


If you want to run a specific wpt, you just load the URL. See https://wpt.live/ (which is linked to from the wpt.fyi bits).

But yes, if you want to run the full test suite and get overall results there's more work to do.


I probably shouldn't nitpick here, but Chrome's WebCodecs team is a subset of the Media team. Color simply wasn't the first priority to implement.

What has changed is that more functionality got pulled into WebCodecs as it became clear that existing web platform objects (eg. ImageBitmap) were not an ideal fit.


This is one place the ISOBMFF spec really screwed up, in my opinion. They should have included semantics for the colr box, i.e. which takes precedence.

Instead we get: "Colour information may be supplied in one or more ColourInformationBoxes placed in a VisualSampleEntry. These should be placed in order in the sample entry starting with the most accurate (and potentially the most difficult to process), in progression to the least. These are advisory and concern rendering and colour conversion, and there is no normative behaviour associated with them; a reader may choose to use the most suitable."
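For reference, the payload of one of those boxes (in its 'nclx' form) is just three 16-bit code points plus a full-range flag. A minimal parsing sketch (illustrative only, not a full ISOBMFF reader):

```python
import struct

def parse_colr_payload(payload):
    """Parse the payload of an ISOBMFF ColourInformationBox ('colr').

    For colour_type 'nclx', the payload is three 16-bit code points
    (colour_primaries, transfer_characteristics, matrix_coefficients)
    followed by one byte whose top bit is the full_range_flag.
    """
    colour_type = payload[:4]
    if colour_type != b"nclx":
        raise ValueError("only 'nclx' handled in this sketch")
    primaries, transfer, matrix = struct.unpack(">HHH", payload[4:10])
    full_range = bool(payload[10] & 0x80)
    return {"primaries": primaries, "transfer": transfer,
            "matrix": matrix, "full_range": full_range}

# BT.709 primaries/transfer/matrix (code point 1), limited range:
print(parse_colr_payload(b"nclx" + struct.pack(">HHHB", 1, 1, 1, 0)))
```

The spec's problem, as described above, is that when multiple colr boxes are present a reader is free to pick whichever it finds "most suitable", so two conforming readers can disagree.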


> discontinuing RD in to the browser space

Not just the browser space; they axed e.g. all their people involved in AOM and video R&D, despite being a founding member.

