Google basically decides what to implement, what not to implement, what to deprecate.
For example, take the current controversial change: Google says "slow web is bad" and deprecates an API that allowed extensions to do almost unlimited work on any web page. Oops. Ad blockers depended on that API and did want to do almost unlimited work on any web page - that wasn't a bug - but the API gets dropped, AIUI because Google wants performance. The reason doesn't matter, though. Google gets to set the agenda, no matter what its reasons are.
I'm not sure if that's what the person I was asking meant by "general agenda", considering the change you're talking about (Manifest v3) doesn't even affect Brave[0], or any other Chromium-based browser that simply doesn't wish to implement that specific change. Chromium-based browsers change their code all the time, and decide for themselves whether or not incoming changes will be fully adopted.
If I understand your comment right, you are saying that the opioid crisis shows that people do choose drugs if they can, and that if there were universal healthcare, the system would collapse under the costs.
The problem with this, as I see it: a good chunk of the people addicted in this opioid crisis didn't choose to get addicted. They were all too readily prescribed hard painkillers for ailments that by no means required such treatment (e.g. back pain of various degrees). This is fentanyl as a gateway drug, so to speak. Those people would not have become addicts otherwise. But they got hooked on the medication, and when their prescriptions came to an end, they sought a replacement.
That is why doctors and pharmaceutical companies are getting sued - and are losing.
Further: if psychedelic drugs are indeed as helpful with issues like smoking and depression as recent research suggests, then controlled administration would surely be a great relief to an overwhelmed healthcare system, saving e.g. on expensive antidepressants or on treatments for cancers caused by smoking.
>I’m an adult, I can either make those decisions for myself or hire experts for consultation to help me make the decision.
The opioid crisis is evidence that this can lead to trouble. There's an easy counter-argument that it only causes trouble for the people who choose to start using those substances, and so it's still the right thing to allow someone to do. I tried to present an externality imposed on others to counter that counter-argument, and healthcare is an easy example.
This is really cool. I am working in a related area and I think most of us have assumed that on average, the information rate is 'about the same' for the languages across the world. So it's exciting to see that their results confirm this assumption.
Two qualifying remarks.
1) The 'about the same' is important. Even in their data, there is still quite some variance. They found an average of 39 bits per second, with a stdev of 5. Assuming a roughly normal distribution, that means about 1/3 of the data falls outside the range of 34-44 bits per second (a rough numeric sketch follows these remarks).
2) Which brings me to the uniform information density (UID) hypothesis. According to the UID, the language signal should be pretty smooth wrt how information is spread across it. For many years, the UID was thought to be pretty absolute: even across a unit like a sentence, information was thought to spread fairly evenly. Now, there is an increasing amount of research showing that, esp. in spontaneous spoken language, there is a lot more variance within the signal, with considerable peaks and troughs spread across longer sequences.
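To make remark 1) concrete, here is a rough numeric sketch (in Python, with invented per-language numbers, not the study's data) of the trade-off behind the finding: languages with low per-syllable information tend to be spoken faster, and vice versa, so the resulting bits-per-second rates cluster around a common mean with some spread.

```python
import statistics

# Hypothetical per-language information rates in bits/second (illustrative only):
# bits per syllable * syllables per second.
rates = {
    "lang_a": 2.8 * 14.0,   # low information per syllable, fast speech
    "lang_b": 5.0 * 7.5,    # high information per syllable, slow speech
    "lang_c": 4.0 * 10.5,
    "lang_d": 3.1 * 13.0,
    "lang_e": 4.8 * 9.5,
}

mean = statistics.mean(rates.values())
stdev = statistics.stdev(rates.values())
outside = [name for name, r in rates.items() if abs(r - mean) > stdev]

print(f"mean = {mean:.1f} bits/s, stdev = {stdev:.1f}")
print(f"outside the one-sigma band: {outside}")
# For a roughly normal distribution, about 32% of the data falls outside
# mean +/- one stdev, which is where the "about 1/3" above comes from.
```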
Why did everyone assume it would be the same on average? This seems weird to me.
Also, can you explain more about how the information density was calculated? Anything at the bit level seems crazy small to me. Words convey a lot of information. They cause your brain to create images, sounds, emotions, smells, etc. I guess we're calling language a compression of that? But even still, bits seems small.
> Why did everyone assume it would be the same on average? This seems weird to me.
(see edit below; but i leave this up; it might be interesting, also)
you mean that even for smaller sequences, the UID holds, right? the assumption was that even for a single sentence, there are a lot of ways to reduce or increase information density so that you get a smoother signal. e.g. take "It is clear that we have to help them to move on." - you could contract it to "it's clear we gotta help them move on" and contract it even further in the actual speech signal ('help'em'). or you could stretch it: "it is clear to us that we definitely have to help them in some way to move on", or the like. the assumption was that such increases/decreases would even be done to 'iron out' the very local peaks and troughs, particularly in speech.
bits: yeah, that took me a while to get used to, as well. the authors used (conditional) entropy as a way to measure information density (which is a good measure in this instance imv). and bits are just, by definition, the unit that comes out of information-theoretic entropy: https://en.wikipedia.org/wiki/Entropy_(information_theory) . btw: while technically possible, i don't think that the comparison in the summary article between 39 bits in language and a xy bit modem is a helpful one. bits in the context of entropy are all about occurrence and expectation in a given context. bits in a modem / in CS represent low-level information content for which we don't consider context and expectation.
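to make the 'bits' a little more concrete, here is a minimal sketch in Python of how Shannon entropy yields bits as its unit. the syllable sequence is invented toy data, and it uses plain (unconditional) entropy for simplicity, whereas the study used conditional entropy - so this only illustrates the unit, not the authors' actual pipeline.

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over the symbol probabilities."""
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy 'language' transcribed as a sequence of syllables (invented data).
syllables = ["ka", "to", "ka", "mi", "to", "ka", "su", "mi", "ka", "to"]

h = shannon_entropy(syllables)
print(f"entropy: {h:.2f} bits per syllable")

# Multiplying by a (hypothetical) speech rate gives an information rate in bits/second.
syllables_per_second = 8.0
print(f"information rate: {h * syllables_per_second:.1f} bits per second")
```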
edit: ah, i realise you are asking why most in our community assumed that this universal rate applied across languages, right?
i guess the intuition was that all of us humans, no matter what language we speak, use the speech signal to transmit and receive information and that all of us have the same cognitive abilities. so the rate at which we convey information should be about the same. sure, there are probably differences according to some factors (spoken vs written language, differences in knowledge between speakers, etc.). but when the only factor that differs is English vs Hausa, esp. in spontaneous spoken language, then the information rate should be about the same.
> esp. in spontaneous spoken language, then the information rate should be about the same.
This is entirely non-intuitive to me. I would think, with languages evolving, that some would be faster than others. If a language starts out conveying extremely simple thoughts, then it should take longer to convey certain things. I would then assume that as the language develops, it gets better at conveying ideas. I would think that thoughts could go much faster than how we process them with language. Like, I have constant thoughts that are really fast and can be complex. There's no internal dialogue there. But when I think with an internal dialogue, it is much slower.
After a few cocktails, once or twice, I've wondered with friends whether some "fuzzy" information rate constant might be a reference by which our brain understands the passage of time. In other words: if there is a fundamental processing rate of x/time, then theoretically, wouldn't our brains subconsciously use that for all kinds of neat reasons?
And the rate wouldn't have to be the exact same value for each individual, so long as the brain can attune its specific value to other reference points for time in nature.
I find this approach strangely condescending. For example the author says:
> Understanding the value attributed to X, Y, and Z in that particular text requires assessment of the rhetorical strategies of the author(s).
They could've just said: if you want to know why the author thinks X, Y, and Z are important, you need to look at what they are saying about them.
I'm a hardcore postmodern leftist, but I don't see how writing in such a contorted way helps practicing scientists. In fact I would argue that this kind of listing obscures a politics of its own; it is so busy prescribing citation practices that it won't examine its own politics.
That said, it's the first time I've seen this guide, so maybe I need to read up on the issues; still, a list of dos and don'ts isn't the best way to introduce the issues and help people understand them.
"Some" will condemn this ban by appealing to freedom of speech. But freedom of speech is not absolute. Zeid Ra’ad Al Hussein, as per the Guardian:
"I am an unswerving advocate of freedom of expression, which is guaranteed under Article 19 of the International Covenant on Civil and Political Rights (ICCPR), but it is not absolute. Article 20 of the same covenant says: ‘Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law."[0]
As is so often the case, different values are in tension with each other. And different societies draw the line at different places, somewhat favouring one or the other value. I hope we can agree that 8chan, due to the lack of sensible moderation, is way past that line by all standards.
Edit: to clarify, this is not meant to be a strawman. By "some", I don't mean anyone here or the like, but those on 8chan, TD, etc., who have brought forward this argument in the past.
That swerve doesn't apply to the US's actual, unswerving protection of freedom of expression: the covenant was ratified with the following reservation (the first among others):
> (1) That Article 20 does not authorize or require legislation or other action by the United States that would restrict the right of free speech and association protected by the Constitution and laws of the United States.
> Every media article I've read cites her team-credit emphasis.
But that is the problem, actually. Simple question: is Prof Bouman the team leader who held all this together?
In answering this, consider this excerpt from the website of the ERC, the project's major funding body:
> Since 2014, this six year research project is being carried out by three lead scientists and their teams; namely Professors Heino Falcke from Radboud University Nijmegen (also Chair of EHT Science Council), Michael Kramer from the Max Planck Institute for Radioastronomy, and Luciano Rezzolla from Goethe University Frankfurt. [0]
So to answer "What more should the media do?", my guess would be: at some point, mention Falcke, Kramer, and Rezzolla? And the ERC?
People generally value fairness, and crediting people unfairly goes against that. If it appears to you that other members of the team might have contributed just as much or even more but weren't credited accordingly, then that goes against this "instinct" - and why shouldn't it? You're missing the point: the question is whether she was unfairly credited or not.
Meanwhile, capitalism is wholly predicated on unfairness, where the "haves" exploit the "have-nots" to get as much for themselves as they can.
Why does a woman getting "credited unfairly" strike a nerve when, every day, a CEO takes singular credit for an entire corporation's worth of people's work with no "Unfair credit!" reaction?
Exactly.
I could give so many examples, but here's one that comes to mind.
Think of the many articles you read about Steve Jobs and the iPhone.
Did you once say/ask “it was not just Steve Jobs. It was a whole team of people who created the iPhone”?
The media and the public in general give him the credit because he led the team that developed the iPhone.
Or the first man to walk on the moon: he couldn't have done it without a whole team of people working before, during, and after the landing. Most of the media does not go into that detail when writing stories about it.
It’s silly to say that everyone on a team should be mentioned by name in every article that comes out about an accomplishment.
1) is different in quality. Possibly 2), too, but I don't know enough about it.
in CH, search engines are state-censored for political reasons - mainly to keep a semi-dictatorship in power.
the right to be forgotten was implemented to protect individual rights. one may or may not agree with such a protection. however, the motivation was not systematic political censorship.
I'm guessing you typo'd CN as CH, as CH is Switzerland (https://en.wikipedia.org/wiki/ISO_3166-2:CH), but if you do mean CH (the post you were replying to mentions other agency supervision), could you share more?
1) What really bothered me personally about GPT2 is that they made it look sciency by putting out a paper that looks like other scientific papers -- but then they undermine a key aspect of science: reproducibility/verifiability.
I struggle to believe 'science' that cannot be verified/replicated.
2) In addition to this, they stand on the shoulders of giants and profit from a long tradition of researchers and even companies making their data and tools available. But "open"AI chose to go down a different path.
3) Which makes me wonder what they are trying to add to the discussion. The discussion about the dangers of AI is fully ongoing. Not releasing background info also means that openAI is not contributing to the question of how dangerous AI should be approached. openAI might or might not have a model that is closer to some worrisome threshold, but we don't know for sure. So imv, what openAI primarily brought to the discussion are some vague fears about technological progress -- which doesn't help anyone.
Re 1: GPT2 is no different from most stuff by DeepMind. DeepMind, in general, does not release code, data, or models. DeepMind does not seem to get complaints about reproducibility, that supposedly "key aspect of science".
The analogy breaks down, though - in Google's disfavour and in OP's favour.
OP knew that she had something valuable. But she also thought of it as a free good (libre, not gratis). As I interpret it, this is because she knew that her work builds on the work of others.
At Google, they probably knew about the intellectual background of OP's innovation, too. And yet, they tried to patent it. So much for their intellectual honesty.
well, except for the fact that it's still a chromium browser and google really sets the general agenda for all chromium browsers.