
There was a brief period, from the fall of the Soviet Union to Bush's invasion of Iraq, where "rules-based international order" was not a joke, and in fact was taken pretty seriously by quite a lot of people.

Democracy, free trade, free speech and freedom of religion had "won" over the Soviet Union. International treaties were reducing stockpiles of nuclear and chemical weapons. The WTO had just started resolving trade disputes through negotiation rather than trade wars. International peacekeeping forces were preventing ethnic cleansing in Bosnia and Kosovo, even though there wasn't anything like oil motivating them. Planners of the genocides in Yugoslavia and Rwanda were being prosecuted by an international war crimes tribunal.

Then-UK-Prime-Minister Tony Blair believed in this stuff pretty earnestly - in fact he wanted to get a UN resolution authorising the Iraq invasion so badly he was happy to submit fabricated WMD evidence to get it.

Of course, even at the height of the "rules-based international order" there were always some stark inconsistencies - especially in the middle east, for example.


Definitely not. The article does go on to acknowledge this:

"The result of this volume bias in the system is an onslaught of low-quality legislation. Compliance is often impossible. A BusinessEurope analysis cited by the Draghi report looked at just 13 pieces of EU legislation and found 169 cases where different laws impose requirements on the same issue. In almost a third of these overlaps, the detailed requirements were different, and in about one in ten they were outright contradictory."

Whenever I hear a politician patting himself on the back for how many pieces of legislation he got passed, I cringe at the thought of all the junk in it.


I suppose the Paradox of Tolerance gives you blanket permission to be violent against anyone who fails to meet your exact political litmus test? Indeed, Popper’s warning about tolerating the intolerant was intended to guard against existential threats to a pluralistic society, not to license indiscriminate hostility. By extending the label of “intolerant” to encompass nearly all Republican or conservative positions, you transform the paradox into a broad justification for suppressing any viewpoint you oppose.

Moreover, while it is undeniably true that significant social and political progress has required great effort and, in some cases, profound sacrifice, it does not follow that we must now treat all dissenting views as immediate dangers warranting violent reprisal. If anything, the most effective way to preserve the foundations of liberal civil society, such as robust public education, fair labor laws, and equitable treatment for all, is to engage in an open, if sometimes messy, democratic process, rather than to endorse sweeping forms of retribution.

When we equate every policy disagreement with an existential threat, we risk undermining the very civil discourse we claim to protect. Therefore, invoking the Paradox of Tolerance to rationalize violence is far removed from Popper’s original intention and, taken to its extreme, contradicts the core values of a tolerant and inclusive society.


Average age of engineers and scientists in the Manhattan Project was 25.

Our current gerontocracy is ahistorical.

Perhaps one reason startups work so well is they are one of the few places that still let young people exert agency.

The average age of NASA's mission control team during the Apollo era was 27, and they put humans on the moon. Young people bring a force of curiosity and creativity that can disrupt the status quo. If we're serious about cutting waste in gov spending, let's not turn away new minds.

The guys featured in this gross and irresponsible hit piece by Wired, by all accounts, are brilliant engineers. Top 1%.

- one decoded the Herculaneum Papyri at the age of 20, winning the Vesuvius Challenge

- another built a startup funded by OpenAI

- one interned at SpaceX and got a Thiel Fellowship

- another was a top engineer at a major AI firm

This is who they are bullying and putting a target on. The best of us nerds. https://x.com/anothercohen/status/1886480470185001025


My twitter account wasn't big, but it was non-trivial (~30K followers). A post could usually get me to experts on most topics, find people to hang out with in most countries, etc. There were many benefits, so deleting was very hard.

But it was eating my brain. I found myself mostly having tweet-shaped thoughts, there was an irresistible compulsion to check mentions 100 times a day, and I somehow felt excluded from all the "cool" parts, which was making me miserable. But most importantly, I was completely audience captured. To continue growing the account I had to post more and more ridiculous things. Saying reasonable things doesn't get you anywhere on Twitter, so my brain was slowly trained to have, honestly, dumb thoughts to please the algorithm. It also did something to my attention span. Reading a book cover to cover became impossible.

There came a point when I decided I just don't want this anymore, but signing out didn't work-- it would always pull me back in. So I deleted my account. I can read books again and think again; it's plainly obvious to me now that I was very, very addicted.

Multiply this by millions of people, and it feels like a catastrophe. I think this stuff is probably very bad for the world, and it's almost certainly very bad for _you_. For anyone thinking about deleting social media accounts, I very strongly encourage you to do it. Have you been able to get consumed by a book in the past few years? And if not, is this _really_ the version of yourself you want?


I feel your argument relies on assuming that being an optimist or pessimist means believing 100% or 0%, whereas I'd claim it's instead more just having a relative leaning in a direction. Say after inspecting some rusty old engines a pessimist predicts 1/10 will still function and an optimist predicts 4/10 will function. If the engines do better than expected and 3/10 function, the optimist was closer to the truth despite most not working.

Similarly, being optimistic doesn't mean you have to believe every single early-stage invention will work out no matter how unpromising - I've been enthusiastic about deep learning for the past decade (for its successes in language translation, audio transcription, material/product defect detection, weather forecasting/early warning systems, OCR, spam filtering, protein folding, tumor segmentation, drug discovery and interaction prediction, etc.) but never saw the appeal of NFTs.

Additionally, it's worth considering that the cost of trying something is often lower than the reward of it working out. Even if you were wrong 80% of the time about where to dig for gold, that 20% may well be worth it; merely reducing the frequency of errors is often not the right objective. It's useful for a society to have people believe in and push forward certain inventions and lines of research even if most do not work out.

I think xvector's point is about people rehashing the same denunciations that failed to matter for previous successful technologies - the idea that something is useless because it's not (or perhaps never will be) 100.0% accurate, or the "Until it can do dishes, home computer remains of little value to families"[0] line, which I've seen almost verbatim for AI many times (extra silly now that we have dishwashers).

Given in real life things have generally improved (standard of living, etc.), I think it has typically been more correct to be optimistic, and hopefully will be into the future.

[0]: https://pessimistsarchive.org/clippings/34991885.jpg


It’s interesting to observe how detached the discussion here is from the issues created for Europe due to the DMA. A majority of the comments here make the implicit assumption that the DMA is good because it will penalize big tech companies and force them to change business models in the EU. This is not what is happening or will happen.

What’s really being destroyed by the DMA is Europe’s access to new technologies and services. It’s almost like a self-embargo on the AI building blocks of their future economy.

When Nvidia GPUs are supply constrained do you really think it matters to Nvidia if they need to redirect the small chunk of their supply constrained volume that they were previously selling into France? Who is harmed in this picture? The only EU AI player of note, Mistral, and other EU businesses.

Does it really harm Apple if the DMA forces them to withhold new AI features in Europe? They still earn their device and services revenues. Who is harmed in this picture? EU consumers and businesses.

We've now seen, within just a couple of months, Apple withhold AI features and Meta withhold multimodal AI models from the EU. Expect this cutting off of the EU from new features to become a recurring event over the next year.

DMA-supporting voices are under a serious misapprehension of what the effect of the DMA is and will be over the next few years. It’s cutting off European access (consumer, business & government) to critical technologies which are all being developed outside the bloc.

The DMA violates a number of longstanding principles of good legislation: it is vague, it is written to enable arbitrary enforcement, and its penalties are not designed to be proportionate to damages, or even to require actual damages in order to be applied. The regulator's actions also stray into actual takings of property: the European Commission's opinion that Facebook cannot charge a subscription fee for its ad-free offering (so it must operate as a charity?) is a taking of property, as is its opinion that Apple cannot charge a platform fee for use of its IP.


You need a Mexican prescription though.

Here it is for $211 per month: https://www.farmaciasguadalajara.com/ProductDisplay?urlReque... though that’s only a deal if your insurance won’t cover it in the states.

I bet it’s not in stock.


It's not AI, but drug makers have already opted out of EU countries because of their regulations.

"Drugmakers Boehringer-Ingelheim and Eli Lilly have called off plans to market their Type 2 diabetes drug linagliptin in Germany because new legislation in the country could mean that pill's price could end up being too low."

https://www.reuters.com/article/boehringer-lilly-idINL5E7K23...


> A tech worker in Norway also pays less for healthcare and housing,

A tech worker in SV has healthcare covered and makes so much more than a Norwegian that they can afford SV rents.

> is healthier,

Life expectancy in Norway: 83. Life expectancy of northern Europeans in the US: 83 for men, 85 for women [1]. It's misleading to compare life expectancy at the national level, since it tracks cultural/ethnic traits rather than your passport. The numbers are similar for East Asians in the US - men (85), women (88) - which is comparable to East Asian countries (eg. Japan at 85).

> lives in a society where everyone lives safer and more equally

More equally - not sure if that's a good thing or a bad thing; depends on the details. Though looking at the number of talented individuals flocking to the US from the supposedly egalitarian countries, it seems the US is doing something right.

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5026916/


I was surprised they didn't mention the Pleiades constellation in the Star Stories section. Today, only six stars in the cluster can be seen with the naked eye. However, both European and Aboriginal traditions describe them as "seven sisters". It could of course be coincidence, but astronomical extrapolation of the stars' movements shows they would indeed once have been visible as seven, so it might actually be evidence of a shared story. The maths puts the last period when the Pleiades appeared as seven at around 100,000 years ago.

The Pleiades Folklore Wikipedia page has loads of interesting general info about the constellation: https://en.wikipedia.org/wiki/Pleiades_in_folklore_and_liter...

And Youtuber Crecganford has a video with well-sourced academic references exploring the idea that The Seven Sisters is humanity's oldest story: https://www.youtube.com/watch?v=_qyjKND3dAE


Actually, that stock appreciation value IS available to you. You can get a very low interest (sub 1%) loan using your stock as collateral. If you have enough stock (like the very wealthy), you can just keep doing this and paying off the interest with more loans.

Then, when you die, your estate can take advantage of the step up rule. Which means that instead of using the original value of the stock you bought as a basis to calculate your profits, the basis is "stepped up" to the value of your stock on the day you died. After which, your estate can sell the stock and pay no taxes and then pay off the loans you lived off of your whole life.

And that is exactly why companies with founders that own lots of shares are no longer paying out dividends and trying to make a profit. They simply put everything possible back into growing the stock price eternally. Because dividends are taxed. Stock appreciation, if you are rich enough, is not.
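As a toy illustration of the mechanics described above (all numbers are made up, and the 23.8% rate is an illustrative federal LTCG + NIIT figure, not tax advice):

```python
# Toy numbers: $1M basis grown to $10M in stock value.
basis, value = 1_000_000, 10_000_000
ltcg_rate = 0.238  # illustrative: 20% federal LTCG + 3.8% NIIT

# Option A: sell during your lifetime -> capital gains tax is due.
tax_if_sold = (value - basis) * ltcg_rate      # ~$2.14M owed

# Option B: borrow against the shares instead. At death the basis is
# "stepped up" to market value, so the estate sells with zero taxable
# gain, repays the loans, and the appreciation is never income-taxed.
stepped_up_basis = value
tax_at_death = (value - stepped_up_basis) * ltcg_rate   # $0
```

The comparison ignores loan interest and estate tax, but it shows why the step-up rule makes appreciation, for the sufficiently wealthy, effectively untaxed.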


To any NIDS newbies out there... Please don't deploy this in-line. Ever. You should always have your IDS out-of-band. Software like Snort is great for detecting threats, but if you block every "threat" you're going to have a bad time. Not to disparage the quality of their rulesets, they are very high quality, but there absolutely will be false positives. I've spent many evenings and weekends troubleshooting IPS problems, 0/10 cannot recommend.

The best option is to mirror all traffic from switches directly to capture boxes where the detection and logging happens. This should be sent to a central system that has a full picture of the network and traffic patterns. That central system should be the one making the decisions, and it should be very smart. Automatic firewall rules should be close to the source, and shutdown switchports should be close to the client.

For safe IPS operation there needs to be several layers of filters, not just a list of "allowed rule IDs". This is the sort of project that takes at least a year to fully roll out - it's not the kind of thing a "security whiz" can set up in an afternoon.

At best, it can be a very useful diagnostic, logging, and threat detection tool. At worst, it can cause very difficult to predict and troubleshoot network problems.
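As a rough sketch of the out-of-band setup: on a Linux box you can approximate a switch SPAN port with tc and run the sensor passively against the mirror interface. Interface names here are assumptions, and in practice you would mirror at the switch itself rather than on a host:

```
# Mirror all ingress traffic on eth0 to the capture interface eth1
tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: matchall \
    action mirred egress mirror dev eth1

# Run Snort read-only against the mirror: detection and logging,
# never inline blocking
snort -c /etc/snort/snort.conf -i eth1 -A fast
```

The key property is that the sensor only sees copies of packets, so a false positive (or a sensor crash) can never break production traffic.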


It's not TW/h/yr but TWh/yr.

For reference, 120 TWh/yr is only twice the production of the Baihetan dam https://web.archive.org/web/20160302215312/http://english.cw...

Most people don't realize that an entire "mid-sized" country could run off just two hydro dams.
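To make the unit concrete, 120 TWh/yr corresponds to a fairly modest continuous power draw (the per-dam figure below is inferred from the "twice the production" comparison above):

```python
# 120 TWh consumed over one year, expressed as average power
twh_per_year = 120
avg_gw = twh_per_year * 1e12 / (365 * 24) / 1e9
print(round(avg_gw, 1))   # ~13.7 GW of continuous draw

# "twice the production of the Baihetan dam" implies ~60 TWh/yr per dam
dam_twh = twh_per_year / 2
```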


strcat is right here. Purism designs and markets their devices, effectively, to cater to a crowd that believes that devices with actual security are inherently evil, because they do not understand it. You can have better security with user control, but for that you need to look at the details. They don't care about the details; their story is all fluff under the guise of freedom and privacy.

Purism's marketing material is outright deceptive; e.g. they insist that in competing phones the baseband blob has access to system memory, which is a lie. The reality is that the baseband blob in the Librem 5 (every bit as much a giant blob as the competition's) has access to the USB port of the AP, and with no filtering implemented yet, the attack surface it is exposed to is every USB driver in the Linux kernel. That is much worse than systems with embedded basebands and proper memory firewalling, where the baseband has no more inherent access but is exposed to a smaller attack surface. It means you are more vulnerable to giant blobs doing evil things with a Librem 5 than with, say, an Android phone running a free OS build.

Then there's the whole hilarious situation with the RAM initialization blob, where Purism went and hid it behind two layers of CPUs (not execution, just handling it), because somehow doing that - which provides absolutely no benefit to the user; it's just a waste of engineering time - made it possible to certify the phone under the FSF's utterly broken and nonsensical "Respects Your Freedom" program, even though precisely zero freedom was gained: it's still running the same blob on the same final CPU with the same access. All the while reducing security, since the blob is then no longer part of the normal OS image and is not validated with it, so it could be backdoored as part of a supply chain attack and you would be none the wiser.

The whole thing just stinks the more you look into it, and it is completely evident that the folks behind Purism are a mix of deliberately deceiving people and just clueless about security and modern embedded platforms.


GrapheneOS and AOSP are Linux-based and there are no closed source kernel modules. They aren't somehow not actual Linux due to not using systemd, glibc, binutils, GCC, pulseaudio/pipewire, polkit, NetworkManager, GNOME, etc. If that's what you mean, you should say so, because those userspace components are not Linux and not using those doesn't make it any less of a Linux distribution. Is Alpine not a real Linux distribution? Is it only a real Linux distribution if it looks like what you're familiar with? More developers are familiar with Android than the desktop Linux software stack. More work goes into it. Far more apps are written for it, and that includes a very active open source app ecosystem.

Sticking to an LTS kernel branch for the lifetime of the device isn't due to anything closed source. GrapheneOS only supports devices with proper security support for all the firmware, drivers, etc. and again there are no closed source kernel drivers. We can support pretty much any mobile device with alternate OS support since any serious one will have AOSP support. Most devices have lackluster security and don't meet our requirements. We're working with a hardware vendor to get a non-Pixel phone actually meeting reasonable security requirements.

Librem 5 has a bunch of components where they are not shipping updates. You have things very much backwards on that front. The Librem 5 does not come close to meeting the security requirements to run GrapheneOS. It has a bunch of poorly secured and insecurely configured legacy hardware often without proper updates available, components that are not properly isolated via IOMMU, no secure element or all the stuff that comes along with that (HSM keystore with a nice API used by apps, Weaver to make disk encryption work for users without a high entropy passphrase like 7 diceware words, insider attack resistance, working attestation not depending on hard-wiring hashes and a lot more) and many other things. The OS they use has a near total lack of any systemic overall privacy/security work or privacy/security model and only falls further and further behind. The most exciting feature for securing devices right now is hardware memory tagging support in ARMv9, but there are years and years of tons of important privacy/security work done in a systemic way across hardware/firmware/software which are missing there before worrying about stuff like that.

Marketing something as private/secure and spreading tons of misinformation and outright lies about the mainstream options doesn't make it secure or more secure than those. It's actually pretty funny that they mislead people about the isolation of hardware components like the cellular baseband in other devices when the vast majority of mainstream phones (iPhone, Pixel, Qualcomm SoC devices, Exynos SoC devices) have it done quite well when they don't. Strange that they get away with these games of misrepresenting things, hiding the fact that they still have entirely proprietary hardware and near entirely proprietary firmware for the SoC and other hardware components, etc. Hiding proprietary stuff doesn't make it go away. Not updating it doesn't make it go away and simply ensures a highly insecure device.


Good writeup. One thing I would add for bastions, if you want to harden them, is to disable session multiplexing if you are using MFA/2FA:

  MaxSessions 1
The default is 10. The plus side of multiplexing is that subsequent connections using the same ssh connection channels are not validated against the authorization mechanisms such as login or 2FA. This reduces friction and speeds up the login process because login is not actually occurring. The trade-off of multiplexing is that all subsequent logins using that ssh connection are not logged nor are they validated with MFA. This means a person phishing your team members can easily hijack their connections without needing a password or 2FA and there are no lastlog entries. SSH Session multiplexing combined with passwordless sudo makes taking over a company trivial even if they have 2FA and strong passwords.
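For context, the multiplexing that MaxSessions 1 shuts down is driven by client-side OpenSSH options like these (a sketch of a typical ~/.ssh/config; the host alias is hypothetical):

```
Host bastion
    ControlMaster  auto
    ControlPath    ~/.ssh/cm-%r@%h-%p
    ControlPersist 10m
```

With MaxSessions 1 on the server, each new session requires a fresh TCP connection and therefore a fresh pass through login and MFA.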

Another risk with a bastion model is port forwarding. As an organization you have to decide what is appropriate for that bastion. Unrestricted forwarding? Restricted? Denied?

  AllowAgentForwarding                    no
  AllowTcpForwarding                      yes
  PermitOpen                              192.168.1.2:22
If this bastion is for a PCI environment then one may want tighter restrictions. If it is for a development environment then perhaps fewer restrictions and just better auditing on each host to enable forensic remediation.

If your bastion is also used for automation to drop files into a staging area, you can limit that automation to file transfers and even limit what it may do with files. This prevents the automation from having a shell or performing port forwarding.

The keys should be stored outside of home directories to prevent malicious tools from appending additional keys to an account's authorized_keys. Make use of automation to manage key trusts and add a comment to keys to map them to an internal tracking system like Jira. This assumes your MFA/2FA is excluding specific accounts or groups via PAM and permitting the use of ssh keys with specific groups or accounts.

  AuthorizedKeysFile               /etc/ssh/keys/%u

  Match Group                      sftpusers
        Banner                     /etc/ssh/banner_sftp.txt
        PubkeyAuthentication       yes
        PasswordAuthentication     no
        PermitEmptyPasswords       no
        GatewayPorts               no
        ChrootDirectory            /data/sftphome/%u
        ForceCommand               internal-sftp -l DEBUG1 -f AUTHPRIV -P symlink,hardlink,fsync,rmdir,remove,rename,posix-rename
        AllowTcpForwarding         no
        AllowAgentForwarding       no
-P sets limits on what may not be done in sftp. -p does the inverse and limits what may be done. [1] -l DEBUG1 or VERBOSE will give you syslog entries of what commands were executed on the files. This is useful for audits. Some redundant settings above are also useful to set explicitly for audits.

Another thing mentioned in the article is iptables. In a PCI environment one may also want explicit outbound rules using the owner module to limit which users or groups are permitted to ssh out. So if your organization has a group of people allowed to use this host as a bastion, one could write a rule like

  iptables -I OUTPUT -m owner --gid-owner devops -p tcp --dport 22 -d 192.168.0.0/16 -j ACCEPT
Or specify what CIDR blocks, ports, and protocols may be used. You can add REJECT rules after this rule to make it obvious a connection was not allowed, so that people do not spend hours debugging. This module is also handy for limiting which daemons may speak to your infrastructure. How strict or liberal the rule is depends entirely on the needs of your organization.
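A sketch of the companion REJECT rule mentioned above, appended after the ACCEPT so disallowed SSH attempts fail immediately and visibly instead of hanging until timeout:

```
iptables -A OUTPUT -p tcp --dport 22 -j REJECT --reject-with tcp-reset
```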

Lastly, I would add that bastions should have as minimal an OS install as possible and have SELinux enforcing. Actions denied by SELinux should go to a security operations center after you spend some time tuning out the noise and false positives.

[1] - https://man7.org/linux/man-pages/man8/sftp-server.8.html


Yeah, I didn't realize what an enormous difference this made until I ran the numbers.

In your example above, let's say the person purchased those 10 Meta shares for $38 each at the IPO and they're worth $322 each now. That's $3220 in proceeds and a $2840 capital gain.

The taxes on this depend on income level and state of residence, but let's say they're in CA making $300K/year. They'll pay 20% federal capital gains tax + 3.8% net investment tax + 10.3% CA income tax, or $968 in taxes, and they're left with $2252.

On the other hand if they donate the shares to a charity (or DAF), they get a tax deduction for the appreciated amount ($3220), which can be taken against 35% federal income tax + 10.3% CA income tax = $1459.

So in the scenario where they just sell the shares, the proceeds after taking taxes into account are:

  Donor     $2252
  Charity      $0
And in the scenario where they donate the shares, they are:

  Donor     $1459
  Charity   $3220
In other words, for an effective cost to the donor of $793, the charity gets $3220.
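The arithmetic above can be spelled out directly (rates are the ones assumed in the comment - CA resident at $300K/year - and this is an illustration, not tax advice):

```python
shares, ipo_price, price_now = 10, 38, 322
proceeds = shares * price_now            # 3220
gain = proceeds - shares * ipo_price     # 2840

# Scenario 1: sell, paying 20% federal LTCG + 3.8% NIIT + 10.3% CA
tax_if_sold = round(gain * (0.20 + 0.038 + 0.103))   # 968
donor_if_sold = proceeds - tax_if_sold               # 2252

# Scenario 2: donate, deducting the full appreciated value
# against 35% federal income tax + 10.3% CA
deduction_value = round(proceeds * (0.35 + 0.103))   # 1459
effective_cost = donor_if_sold - deduction_value     # 793
```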

So I'm going to rehash some comments I've left on similar threads. Thinking on this apparent lack of alien technological civilizations has advanced a lot since 1960 (when SETI began; from the article). I still see a lot of people stick to ideas that really don't hold up to scrutiny. The two core issues are:

1. We will find alien civilizations by their radio signals; and

2. The limiting factor on alien civilizations are Earthlike planets.

(1) is the flaw in SETI. The general argument against this goes something like this:

1. Planets are efficient ways to store matter (in that gravity binds it together) but are highly inefficient in creating living area or collecting a star's energy;

2. Entering and leaving a gravity well is expensive (in energy terms);

3. Gravity can be trivially replicated with centrifugal force using materials we already have (eg stainless steel).

4. Within 1000 years we will be capable of building space habitats that solve the above 3 problems;

5. As a consequence of the above, the natural tendency for any growing civilization will be to encompass a star with orbiting habitats.

6. Artificial structures orbiting a star, even if they're just energy collectors, would be incredibly obvious to any observer from a huge distance, and far easier to detect than radio transmissions like the ones we make today. The tl;dr is that the only way to get rid of heat in space is to radiate it away. The signature of this is a function of the object's temperature, and for a huge range of temperatures it falls in the IR spectrum; a huge IR signature without a corresponding visible light output is what makes it "obvious".
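A back-of-the-envelope check on point 6, using Wien's displacement law (the ~300 K habitat temperature is an assumption for roughly Earthlike living conditions):

```python
# Wien's displacement law: peak emission wavelength = b / T
b = 2.898e-3            # m*K, Wien's displacement constant
T_habitat = 300         # K, assumed room-temperature radiators
peak_um = b / T_habitat * 1e6   # ~9.7 micrometres: mid-infrared

# The parent star, by contrast, peaks in the visible (Sun ~5800 K)
star_peak_nm = b / 5800 * 1e9   # ~500 nanometres
```

So a swarm re-radiating a star's entire output at habitat temperatures glows brightly around 10 um while emitting little visible light, which is exactly the anomalous signature described above.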

The first radio transmission occurred 120 years ago. In 1000 years (and possibly well before then) we'll be past radio in terms of the signature alien civilizations would detect; they'd see our IR signature first. This is of course the blink of a cosmological eye, so what we're really doing is hoping to see civilizations at the same point we are, which is incredibly unlikely.

This ties into (2). I even saw Neil deGrasse Tyson make this mistake in a talk where he posits that the answer to the Fermi Paradox is that civilizations run out of planets and die out.

Planets are likely important as a cradle of life and the number of suitable planets is likely a filter of some sort but irrelevant in the long term. A star with no planetary system at all can easily be a home to a Kardashev-2 civilization. Matter can be extracted from stars directly (as a side note, extracting Helium to avoid the star going supernova is likely a useful side benefit) and/or constructed with particle accelerators (and all that energy).

What I find compelling about this argument is that it's /almost/ just an engineering problem (albeit a huge one) rather than one requiring huge advances in technology and/or "new physics". Steel is sufficient to build O'Neill cylinders, and solar power is a sufficient energy source for all this (ie it's not predicated on the viability of commercial fusion power generation, which I for one am not yet convinced of; and if fusion is viable, everything gets a whole lot easier).

The other attractive part is that it can be done in a piecemeal fashion: you can build one orbital, then another, and another, and another. Certain other megastructures are much more "all or nothing" (eg ringworlds).

The above is the classic Dyson Swarm, originally called a Dyson Sphere but Swarm tends to be the common name now as a bunch of people have assumed (incorrectly) that a Dyson Sphere is a rigid sphere. Dyson never suggested that and no known or even theorized material could support a rigid sphere that large.

People tend to home in on one issue or another with the above. A popular one is waste heat: "What if they recycle it?" Perfect recycling would violate thermodynamics, so we're really just talking about increasing efficiency, at which point you've reduced your IR signature, not eliminated it.

The natural consequence of all this is that we are relatively alone and likely the only technological species at or above our technology level within our cone of effect in the Milky Way.


I know this is mean of me, but the student who was taking her Computer Science exam and who apparently thought that changing the file's extension would change its format? Yeah... she deserved to fail.

Treasonous.

I don't buy their approach here, unfortunately. They pushed the whole product without a single person raising this as an issue earlier, even though this is the most common problem with block lists in the real world: LGBT sites and sexual health information sites. "We fixed it, it's ok now" - let's see how long that lasts and what the next category to get impacted is. I haven't seen any fixes to the process/validation described in the post (I mean validation of their sources/approach in the future, beyond a spot check on regressions).

Could you explain the principles behind this software?

Do they basically automatically download the newest pirate movies using BitTorrent protocol? Or there is more?


Wouldn't it make more sense to make masks and other PPE first?

You don't need a ventilator if you don't get sick in the first place, masks are much faster to put into immediate production, and impacting the curve earlier is going to have a much bigger effect than later.

I mean make everything. But I have family members working in hospitals right now and they are asking me to search the internet and find masks for them. Not ideal.


The Expanse S4 was going off the fourth book, I believe. I didn’t like it as much as the last two seasons, but it’s supposed to be a setup season for what comes in the next couple books.

Also, Chernobyl was fantastic and Watchmen was delightful, for the most part. And Evil is an interesting supernatural/cutting edge Tech mix. Then there’s the final season of Mr. Robot.


I dug around a little and installed their demo app in the iOS simulator: https://www.dropbox.com/s/hl3nk8jrcjleurj/hippy.mov?dl=0. Visually, the only interesting part of the demo app is the RefreshWrapper example.

Code-wise, the most interesting things I've seen:

- They expose wrappers for native recycling list views[0]. React Native does this in JavaScript through VirtualizedList, however some have experienced performance issues with it[1]

- They wrote their own flexbox layout library[2] (likely based on Yoga)

- It works with both Vue[3] and React

- Hippy supports web as a build target out of the box[4] (react-native-web is a 3rd party library)

- Touch events work on the `<View />` component directly, instead of needing to wrap `<View />`'s in the `<Touchable />` components[5]

- It uses a closed-source fork of libv8 on Android called X5[6].

The API & coding style is quite similar to React Native, but the implementation seems different. I'm guessing this started as an internal fork of React Native and turned into a large refactor, but that's just a guess

[0]: https://github.com/Tencent/Hippy/blob/master/ios/sdk/compone...

[1]: https://github.com/facebook/react-native/issues/13413

[2]: https://github.com/Tencent/Hippy/tree/master/layout

[3]: https://github.com/Tencent/Hippy/tree/master/packages/hippy-...

[4]: https://github.com/Tencent/Hippy/tree/master/packages/hippy-...

[5]: https://github.com/Tencent/Hippy/tree/master/packages/hippy-...

[6]: https://github.com/Tencent/Hippy/issues/9#issuecomment-56822...


I don't think there's any good solution to the dead link problem. For example there are 11 links in this article:

  https://jeffhuang.com/
  https://gomakethings.com/the-web-is-not-dying/
  https://archivebox.io/
  https://webmasters.stackexchange.com/questions/25315/hotlinking-what-is-it-and-why-shouldnt-people-do-it
  https://goaccess.io/
  https://victorzhou.com/blog/minify-svgs/
  https://evilmartians.com/chronicles/images-done-right-web-graphics-good-to-the-last-byte-optimization-techniques
  https://caniuse.com/#feat=webp
  https://uptimerobot.com/
  http://www.pgbovine.net/python-tutor-ten-years.htm
  http://jeffhuang.com/designed_to_last/
How many of these will still be alive in 10 years? How many times do you have to fix your page to make your page "last"?
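If you wanted to track rot in a list like this over time, a minimal checker is easy to sketch (stdlib only; the status handling is simplified, and a real checker should rate-limit and respect robots.txt):

```python
import urllib.request
import urllib.error

def check(url, timeout=10):
    """Return the HTTP status code, or None if the link is unreachable."""
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "linkcheck/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code            # server reachable, but e.g. 404 Gone
    except (urllib.error.URLError, TimeoutError, ValueError):
        return None              # DNS failure, refused, malformed, ...

# for url in urls:
#     print(check(url), url)
```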
