Seems like it's an education or media-bias problem more than a healthcare problem.
If a majority of the poor in America are not educated enough to understand how public healthcare will benefit them and the country, or are educated enough to understand but not educated enough to realise media outlets have a political agenda and might not be reporting accurately, then public healthcare is still not going to happen.
(And they might not want socialism, but it seems like they don't understand that democracy has problems too, and healthcare is the living example. But again, it circles back around to being educated enough to understand that democracy is good, but it isn't perfect, and no system is.)
See, even someone who should know better gets the argument confused. Socialism is not opposed to democracy. Much of Europe is more socialist than the US and has democracies as strong as or stronger than the US's.
You can roll your eyes at me all you want, but I've been programming in C++ for a long time. These memory access issues just don't seem to be a big problem for us in practice. That's because we wrap all raw memory manipulation in appropriate classes for our application, so it's just not an issue. I agree it could be an issue in theory.
He rolls his eyes at "hierarchies". Libraries do make the difference.
Somebody else interjected Design Patterns. You can define a design pattern as a weakness in your language's ability to express a library function to do the job.
> I've also written plenty of C++ code without memory bugs.
The classic response to this is "That you know of." Consider that even quality-conscious projects with careful code review like Chrome have issues like this use-after-free bug from time to time.
So when people claim that they personally don't write memory bugs I tend to assume that they are mistaken, and that the real truth is that they haven't yet noticed any of the memory bugs that they have written because they are too subtle or too rare to have noticed.
That post describes two vulnerabilities: one is in the JIT, but the other one is in regular old C++ code. More generally, JIT bugs are a relatively small minority of browser vulnerabilities. More often you see issues like use-after-free in C++ code that interacts with JS, such as implementations of DOM interfaces, but the issues are not directly JIT related and would be avoided in a fully memory-safe language.
Chrome, like Firefox, is not an example of modern C++ code. Google's and Mozilla's coding standards enforce a late-'90s style. It is astonishing they get it to work at all.
In this case, I mean a subsystem that has been in production since 2006 and has been processing hundreds of thousands of messages a day. I don't claim that it's perfect or bug-free, but if it had significant memory errors I'd have heard about it. I designed and implemented it to use patterns like RAII to manage memory, and it's worked quite well.
When I worked on a mobile C++ project at Google, we went exceptionally out of our way to avoid memory issues.
We ran under valgrind and multiple sanitizers (and continuously ran those with high coverage unit and integration tests). We ran fuzzers. We had strictly enforced style guides.
We still shipped multiple use-after-frees and UB-tripping behavior. I also saw multiple issues in other major libraries that we were building from source, so it can't be written off as mere incompetence on my team.
Basically, it might be possible, but I think writing memory-safe C++ is vastly more difficult than this thread is making it sound.
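To make the above concrete, here is a minimal, hypothetical sketch (not from the project described) of how subtle these bugs are: a reference into a std::vector is silently invalidated when push_back reallocates. Sanitizers can catch the bad read at runtime, but nothing in the language prevents you from writing it; the unsafe lines are left commented out, with the index-based safe version actually executed.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative only: holding a reference across a push_back is a classic
// use-after-free-style bug, because reallocation moves the elements.
int subtle_bug_avoided() {
    std::vector<int> v;
    v.push_back(1);
    // int& first = v[0];       // take a reference into the buffer...
    // v.push_back(2);          // ...reallocation may invalidate it...
    // return first;            // ...use of a dangling reference: UB
    std::size_t first = 0;      // safe: hold an index, not a reference
    v.push_back(2);
    return v[first];            // indices stay valid across reallocation
}
```

The insidious part is that the commented-out version often "works" in testing, because small vectors may not reallocate when you run it.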
Writing memory safe programs in C++ is possible. Most coding styles and some problem domains don't lend themselves to it naturally, though. In my experience, restricted subsets used for embedded software vastly reduce the risk of introducing errors and make actual errors easier to spot and fix.
> Writing memory safe programs in C++ is possible.
Everything "is possible" in the sense that in theory you can do it. But when time and time again people fail to do it, even people who invest almost heroic levels of effort (see above: valgrind, multiple sanitizers, and so on), you get to the point where you have to accept that what is possible in theory doesn't work in practice.
I have seen it done in practice, on rather large systems. But it requires actual, slow software engineering instead of the freestyle coding processes that are used in most places.
My main rule is "no naked new," meaning that the only place the new operator is allowed is in a constructor, and the only place delete is allowed is in a destructor (unless there's some very special circumstance). This style lends itself to RAII. The other rule is to use the standard library containers unless there's a very good reason not to do so. That seems to cover most of the really basic errors.
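The "no naked new" rule described above can be sketched roughly like this (the class and names are illustrative, not from the commenter's system): the only new is in a constructor, the only delete is in the destructor, so every code path, including early returns and exceptions, releases the allocation.

```cpp
#include <cassert>
#include <cstddef>

// Sketch of RAII under the "no naked new" rule: allocation in the
// constructor, release in the destructor, copying disabled because a
// naive copy would double-delete the buffer.
class Buffer {
public:
    explicit Buffer(std::size_t n) : size_(n), data_(new int[n]()) {}
    ~Buffer() { delete[] data_; }
    Buffer(const Buffer&) = delete;
    Buffer& operator=(const Buffer&) = delete;
    int& operator[](std::size_t i) { return data_[i]; }
    std::size_t size() const { return size_; }
private:
    std::size_t size_;
    int* data_;
};

int sum_demo() {
    Buffer b(3);                 // allocation happens here...
    b[0] = 1; b[1] = 2; b[2] = 3;
    int s = 0;
    for (std::size_t i = 0; i < b.size(); ++i) s += b[i];
    return s;                    // ...and release happens automatically here
}
```

In real code you would usually reach for std::vector or std::unique_ptr instead of hand-rolling a class like this; the point is that callers never see new or delete.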
A type system changes the need for test coverage because it eliminates whole classes of bugs statically that would need an infinite amount of tests to eliminate dynamically.
That leaves an infinite number of logic bugs to be tested for. Types cannot fix interface misuse at the integration and system level. So no, this does not reduce the need for testing.
Whether they reduce the need for testing overall is arguable. But what matters in this discussion is that types can guarantee memory safety, meaning that the cases that you forgot to test – and there will always be such cases, no matter how careful you are (just look at SQLite) – are less likely to be exploitable.
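One small sketch of what "types guarantee memory safety" can mean even in C++: std::unique_ptr deletes its copy constructor, so two owners of the same allocation is a compile error rather than a latent double-free. Ownership transfer must be spelled out with std::move, and the moved-from pointer is left empty, which is checkable.

```cpp
#include <cassert>
#include <memory>

// The type system turns a would-be runtime double-free into a
// compile-time error: copying a unique_ptr does not compile.
bool ownership_transfer_leaves_source_empty() {
    auto p = std::make_unique<int>(42);
    // auto q = p;               // would not compile: copy is deleted
    auto q = std::move(p);       // explicit, visible ownership transfer
    return p == nullptr && *q == 42;
}
```

The forgotten-test-case point above is exactly this: you never need a test for "what if two places free this pointer", because the compiler rejects the program that could do it.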
Types can only provide limited memory safety. There is a real need to deal with data structures that are so dynamic as to be essentially untyped. Granted, this usually happens in driver code for particularly interesting hardware, but it happens. Also, I have not yet seen a type system that is both memory safe and does not prohibit certain optimizations.
I haven't written C++ seriously for a number of years. Do you still have to do all that "rule of three" boilerplate to use your classes with the STL? Is it better or worse now with move constructors?
It's a bit better with C++11 syntax, where you can use = delete to remove the default constructors/destructors, e.g.:
class Class
{
    Class();
    Class(const Class&) = delete;
    Class& operator=(const Class&) = delete;
    ~Class() = default;
};
Which I find slightly cleaner than the old approach of declaring them private and not defining an implementation, but the concept hasn't changed much. I'd love a way to say 'no, compiler, I'll define the constructors, operators, and destructors I want - no defaults' but that's not part of the standard.
Move constructors are an extra that, if I remember correctly, don't get a default version, thankfully.
So, so much better. Nowadays we "use" what has been called "rule of zero". Write a constructor if you maintain an invariant. Rely on library components and destructors for all else.
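A minimal sketch of the rule of zero mentioned above (the struct is illustrative): hold resources only through library types and declare no destructor or copy/move members at all; the compiler-generated ones are then automatically correct.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Rule of zero: std::string and std::vector already manage their own
// memory, so this class needs no destructor and no copy/move members.
struct Employee {
    std::string name;
    std::vector<int> scores;
};

bool copies_are_independent() {
    Employee a{"Ada", {1, 2, 3}};
    Employee b = a;               // compiler-generated copy: deep-copies both members
    b.scores.push_back(4);        // mutating the copy...
    return a.scores.size() == 3 && b.scores.size() == 4;  // ...leaves the original alone
}
```

This is exactly the "rely on library components and destructors for all else" point: the old rule-of-three boilerplate disappears because no member owns a raw resource.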
The comparison in that link is pretty meaningless; it scores languages by how many vulnerabilities have been reported in code written in them, without even making an attempt to divide by the total amount of code written in them, let alone account for factors like importance/level of public attention, what role the code plays, bias in the dataset, etc.
You're misrepresenting the report in order to justify your bias. Direct quote from the report:
This is not to say that C is less secure than the other languages. The high number of open source vulnerabilities in C can be explained by several factors. For starters, C has been in use for longer than any of the other languages we researched and has the highest volume of written code. It is also one of the languages behind major infrastructure like Open SSL and the Linux kernel. This winning combination of volume and centrality explains the high number of known open source vulnerabilities in C.
In other words the report explains this with 1) there being more C code in volume and 2) more C code in security-relevant projects (which are reviewed more by security researchers). It also states explicitly that your conclusion is not to be drawn from this.
> This is not to say that C is less secure than the other languages. The high number of open source vulnerabilities in C can be explained by several factors. For starters, C has been in use for longer than any of the other languages we researched and has the highest volume of written code. It is also one of the languages behind major infrastructure like Open SSL and the Linux kernel. This winning combination of volume and centrality explains the high number of known open source vulnerabilities in C.
Please, never ever use code snippets for quotes, unless you hate mobile users. Just put "> " in front.
Or just don't, period. I'm reading this on a 4K desktop display, and I still have to scroll. It's only useful for actual code, which is very rarely posted on HN.
To pile on, Gmail is also well known to have fantastic spam and phishing prevention compared to most other email solutions, something you could never replicate at home without some incredible software that you run yourself (and even then probably not). That's a huge value-add for a ton of users, I'd imagine.
'Nothing to gain' is incredibly hyperbolic, I think, or this person has never read about or used Gmail before.
Convenience and security are often at odds with one another. Not that Gmail search is any good, but you can't search through encrypted mail, as an example.
When you have a checkbox for every possible customisation option, you end up with a commercial airliner cockpit - hundreds of flashing lights and switches, that you need a comprehensive manual and years of experience to properly operate.
It's clearly not the case when the checkbox you're adding is an option to turn off some feature that's enabled by default.
No, not when it's a feature all the other components need so badly that you have to work around it being turned off by adding another zillion lines of code in other places of your program.
In fact, in modern aircraft, they're not flashing. The trend today is to not light a light unless it's important to pay attention to it. It's called "dark cockpit".
> The cluster is actually a great study in UX design.
Agreed. But it's clearly UX design for power users.
But I was talking about the way to combine 'a regular users UX' with a ton of options you need for power users. The vast majority of the people who have Ubuntu on their laptops never recompile the kernel or even know about sysfs, the vast majority of the Windows and Office users never touch the registry editor - but removing them would be a huge mistake.
I wonder if we’re not already at this point though, and it might not be a bad thing.
Not everyone uses a browser the same way, but a decent portion of people here will spend their working life in the browser. I think for people here it won't be rare to have dozens of windows, each with dozens of tabs, some logged into different accounts within the same site, some in incognito, some in developer mode.
Even with just the browser, filling multiple windows' worth of buttons and things to interact with is easy, without even going into hidden preferences and configs.
I'd argue that in complexity level we're already on par with an airliner cockpit; it's our job to deal with that, and we've been doing it professionally for years. Of course not everyone needs that complexity, but at least we do.
What I am getting at is, I think we should accept we're not at the point where it is simple anymore, embrace the complexity, and provide tools to effectively manage it.
Airliner cockpits are the way they are because it's efficient to have individual switches for important actions and state indicators. We shouldn't shy away from showing important info in the interface just because we'd end up with more stuff. Having it hidden can be a worse tradeoff.
The problem is that you're in the 0.001% of people who want their software complicated like an airline cockpit.
Most people don't give a shit, they just want to check their gmail and couldn't care less. They don't even read the alert boxes that do pop up. They just click almost anything blindly.
As a result, companies get away with dark patterns and privacy-compromising changes like this.
I'm somewhere in the middle. I don't want airline-cockpit controls, but I do want the ability to not sync to the cloud/NSA if I want.
I also don't want to be tricked into syncing by some dark pattern silent update that makes an ambiguous clickbox that doesn't clearly say what the privacy implications are either.
Yes, I think the middle ground will be the majority. All the more so since actual "casual" users who want stuff that "just works" will use their phone or tablet, or the default browser that's already installed and is good enough.
Chromebooks are in an interesting position, with a chance to have newer users. But then they will literally live their life in the browser.
In that sense, Chrome users are already set apart I think.
If you are operating something as complex and dangerous, you need the commercial airliner cockpit! It's that way for a reason! Or would you rather board a 747 with a huge colorful button "fly" and another "land"?
If you have something like a browser, which is your last line of defense when accessing online banking etc., you need to see into the myriad of options. If you just browse Facebook, use the defaults and be happy. Knowing about:config shouldn't be a gatekeeper to anything! Going to Settings, then Advanced, should be more than enough to communicate the concept of advanced options. Anything different and you are just being an entitled, incompetent UX designer.
about:config is lazy, but it gets some of the job done. Your oh-so-perfect two-option Google Chrome settings page is lazy and useless.
It just doesn't display your own answers (relative to the static correct answers), giving the impression that there's no correlation between what you attempted, versus the actual expected answers.
Also, I agree that "X" usually is an indicator of a "wrong" answer. And why place check marks (⍻) across all of the other answers, when check marks ordinarily indicate correct/approved answers?
It seems like a bug. It would make sense if there were NOTHING for incorrect answers left unmarked, an X for YOUR unmarked correct answers or marked incorrect answers, and check marks for the answers you marked off correctly.
Given that it's a quiz, people want to compare their answers to the expected answers, so I think they need to create this logic, or patch whatever bug is preventing the answer markings from rendering properly.
Blue lines are the 'correct' answers, they are the MVP features of each product.
Next to each line you get a tick or a cross depending on whether your answer matches theirs. For instance if you got a tick next to a line that was not blue, then it means you agreed with them that it was not an MVP feature.
There are already apps/technologies that transmit information through audio at frequencies not audible to humans. It should be trivial to adapt this so that if two AI systems are interacting they can perform an "AI-handshake" in the audio at the start and then switch to a more efficient form of communication.
Correct. There are several levels at which this applies:
Phone hardware (microphones, speakers) are only calibrated to detect 'useful' frequencies for human speech.
The sampling rates used by audio codecs tend to cut off _before_ the human ear's limits, e.g. at 8kHz or 16kHz. They aren't even trying to reproduce everything the ear can detect; just human speech at decent quality.
Codecs are optimized to make human speech intelligible. The person listening to you on the phone isn't receiving a complete waveform for the recorded frequency range. The signal has been compressed to reduce the bandwidth required, and the goal isn't e.g. lossless compression; it's decent-quality speech after decompression.
It's completely possible to play tones alongside speech that we won't notice, but in the general case, not tones that the human ear can't detect.
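The sampling-rate point above follows from the Nyquist limit: a codec sampling at rate fs can only represent frequencies below fs / 2. A small illustrative sketch (the function names are mine, not from any codec API):

```cpp
#include <cassert>

// Nyquist: a signal sampled at sample_rate_hz can only represent
// frequency content strictly below half the sampling rate.
constexpr double nyquist_hz(double sample_rate_hz) {
    return sample_rate_hz / 2.0;
}

// A tone survives sampling only if it is below the Nyquist limit.
constexpr bool tone_survives(double tone_hz, double sample_rate_hz) {
    return tone_hz < nyquist_hz(sample_rate_hz);
}
```

So narrowband telephony at 8kHz sampling carries nothing above 4kHz, and wideband at 16kHz nothing above 8kHz; a near-ultrasonic tone around 19kHz cannot pass through either, which is why these beacon schemes fail over phone audio even before codec compression is considered.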
Exactly my thoughts. I'm currently using a different podcast app that has a neat "fast forward 30 seconds" feature that I only ever use to skip ads.
It's annoying enough having to get my phone out of my pocket to press the fast-forward button. If the ads were unskippable, I'd switch apps immediately.
Of course not. But it's extremely clear to me that the primary focus of the article is promoting the game, and the story is secondary.
After having seen a lot of these articles, it becomes easier to distinguish the real-story ones from the "ad" ones. And that's why they do it: it's effective, since so many people can't tell it's a paid advert.
I sincerely doubt this is a marketing piece. If anyone's to blame here for that feeling that some may have, it's Kotaku. Jason was one of the most genuine and wonderful people I've met while working at the same co-working space. In fact, all of the people I met in my time at the Gamenest co-working space were quite memorable.
Imagine a new sport called zoccer. It's just like regular soccer, but if a team scores 5 goals within the 90 minutes, they automatically win the match.
Right now there are only a few small teams playing. But Nike has said they are willing to sponsor a tournament, if there is enough interest.
At any point, Barcelona or Manchester United or Real Madrid or Chelsea or AC Milan or any other large soccer team can just say "we are now a zoccer club" and swoop in to take Nike's money.
They all have the power to switch and dominate the new scene.