Hacker News | new | past | comments | ask | show | jobs | submit | ecocentrik's comments

Flash should have transitioned into an authoring tool for SVG + CSS + JS, but it just took a knee because so many people hated Flash for all of its warts by the time SVG and Canvas moved vector graphics rendering to the browser. Flash was a real pain in the ass for most web users, and Web 2.0 technologies did kill it.

> Flash should have transitioned into an authoring tool for SVG + CSS + JS

Didn’t it? IIRC, Adobe had such a tool at some point, and part of it seems to (somewhat) live on [1]. https://en.wikipedia.org/wiki/Apache_Flex:

“Apache Flex, formerly Adobe Flex, is a software development kit (SDK) for the development and deployment of cross-platform rich web applications based on the Adobe Flash platform. […] Adobe donated Flex to the Apache Software Foundation

[…]

In 2014, the Apache Software Foundation started a new project called FlexJS to cross-compile ActionScript 3 to JavaScript to enable it to run on browsers that do not support Adobe Flash Player and on devices that do not support the Adobe AIR runtime. In 2017, FlexJS was renamed to Apache Royale. The Apache Software Foundation describes the current iteration of Apache Royale as an open-source frontend technology that allows a developer to code in ActionScript 3 and MXML and target web, mobile devices and desktop devices on Apache Cordova all at once”

[1] I may be wrong though. It’s not easy figuring out what Flash code ended up in which of Adobe’s Flash-like products over time.


I think the problem might actually be with reinforcing the red lines. The events of the last few weeks and this new deal only make sense if Anthropic was trying to find out how Palantir and the Pentagon had circumvented their restrictions, so that it could reinforce those restrictions like a company actually concerned about the misuse of its product. OpenAI most likely came in with assurances that they wouldn't attempt to reinforce their restrictions.


Isn't the story here that the DOD is pressuring Anthropic and others to enable their AI for this specific use, and for now Anthropic and others are saying no while the DOD threatens them with penalties?

We desperately need real AI safety legislation.


AI safety legislation is for the masses, not the government. Eventually they will get full AI safety by banning all general purpose computing. All apps must exist within walled garden ecosystems, heavily monitored. Running arbitrary code requires strict business licensing. Prison time for illegal computing. Part of Project 2025 playbook.


No. I'm suggesting there should be AI safety regulation to limit how AI can be used by the government. It's new tech and it pays to be cautious and restrict usage in areas like nuclear missile launch and domestic surveillance.


Regulation is also for the government. If some morons stop following the constitution, they stop being your government.


Doesn't this run into the same bottleneck as developing AI-first languages? AIs need tons of training material for how to write good formal verification code, or code in new AI-first languages, and that material doesn't exist. The only solution is large-scale synthetic generation, which is hard to do if humans, on some level, can't verify that the synthetic data is any good.


In the US they also get scanned and stored.


My advice: There's always at least one crypto scammer telling you to hold through the dip.


I hear there’s always money in the banana stand.


Given the choice between a 2000-acre banana plantation and 400 bitcoin, I would choose the banana plantation with full confidence that I would get a better return from bananas over the next 20 years.


What can it cost, $5?


My advice... Take a time machine back to 2009-2012 & only invest 100%.

Otherwise it's too late.


I agree. Agentic use isn't always necessary. Most of the time it makes more sense to treat LLMs like a dumb, unauthenticated human user.
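To make the "unauthenticated human user" framing concrete, here is a minimal sketch of what that looks like in practice: the model's reply is parsed defensively and checked against an allowlist before anything acts on it, exactly as one would treat form input from an anonymous visitor. The `ALLOWED_ACTIONS` set, the JSON reply schema, and `handle_llm_reply` are all hypothetical names for illustration, not part of any real API.

```python
import json

# Hypothetical allowlist of actions the application is willing to perform.
ALLOWED_ACTIONS = {"search", "summarize"}


def handle_llm_reply(raw_reply: str) -> dict:
    """Validate an LLM reply as if it came from an unauthenticated user:
    parse defensively, allowlist the requested action, and bound inputs."""
    try:
        payload = json.loads(raw_reply)
    except json.JSONDecodeError:
        return {"ok": False, "error": "reply was not valid JSON"}

    if not isinstance(payload, dict):
        return {"ok": False, "error": "reply was not a JSON object"}

    action = payload.get("action")
    if action not in ALLOWED_ACTIONS:
        # Never execute an action just because the model asked for it.
        return {"ok": False, "error": f"action not allowed: {action!r}"}

    query = payload.get("query")
    if not isinstance(query, str) or not (0 < len(query) <= 500):
        return {"ok": False, "error": "query missing or out of bounds"}

    return {"ok": True, "action": action, "query": query}
```

The design choice is the same as for any untrusted input: the model gets no authority of its own, and every capability it can trigger is explicitly enumerated and bounded on the application side.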


Mississippi? I bet it's a flyover state with a tiny sliver of road that sees massive trucking volume.


It's gonna be California (but I'm guessing, not sure). Other states just defer to federal regulation.

That they don't put the state on blast sort of points to the big cost not being entirely real (either they think they can induce regulatory change, or the number of tests needed to sell the systems is quite a lot less than the number of tests that would be needed to allow 100% of the market to use their system).


mississippi doesn't make people do certifications lol. unless you drive a hybrid, then you pay the hybrid tax.


Eh. Discovering how neurons can be coaxed into memorizing things with almost perfect recall was cool, but real AGI or even ASI shouldn't require the sum total of all human-generated data to train.


There was a period of time when Wikipedia was more scrutinized than print encyclopedias, because people did not understand the power of having 1000s of experts and the occasional non-expert editing an entry for free instead of underpaying one pseudo-expert. They couldn't comprehend how an open-source encyclopedia would even work, or trust that humans could effectively collaborate on the task. They imagined that 1000s of self-interested chaos monkeys would spend all of their energy destroying what 2-3 hard-working people had spent hours creating, instead of the inverse. Humans are very pessimistic about other humans. In my experience, when humans are given the choice to cooperate or fight, most choose to cooperate.

All of that said, I trust Wikipedia more than I trust any LLMs but don't rely on either as a final source for understanding complex topics.


> the power of having 1000s of experts and the occasional non-experts editing an entry

When Wikipedia was founded, it was much easier to change articles without notice. There may not have been 1000s of experts at the time, like there are today. There are also other things that Wikipedia does to ensure articles are accurate today that it may not have done, or been able to do, decades ago.

I am not making a judgment of Wikipedia, I use it quite a bit. I am just stating that it wasn't trusted when it first came out specifically because it could be changed by anyone. No one understood it then, but today I think people understand that it's probably as trustworthy as, or more so than, a traditional encyclopedia is/was.


> In my experience when humans are given the choice to cooperate or fight, most choose to cooperate.

Personally, my opinion of human nature falls somewhere in the middle of those two extremes.

I think when humans are given the choice to cooperate or fight, most choose to order a pizza.

A content creator I used to follow was fond of saying "Chill out, America isn't headed towards another civil war. We're way too fat and lazy for that."


Even ordering a pizza requires the cooperation of a functioning telecom system, a pizza manufacturer, a delivery person, a hungry customer...


Sure, but I hope you get my point. Fighting takes effort, cooperation takes effort. Most people have other things to worry about and don't care about whatever it is you're fighting or cooperating over. People aren't motivated enough to try to sabotage the Wikipedia articles of others, even if they could automate it. There's just nothing in it for them.


The opposite of love and hatred is apathy.


For better or worse, it's also what makes for reliable systems.


> "They imagined that 1000s of self-interested chaos monkeys would spend all of their energy destroying what 2-3 hard working people has spent hours creating instead of the inverse."

Isn't that exactly what happens on any controversial Wikipedia page?


There aren't that many controversial topics at any given time. One of Wikipedia's solutions was to lock pages until a controversy subsided. Perma-controversy has been managed in other ways, like avoiding the statement of opinion as fact, the use of clear and uncontroversial language, using discussion pages to hash out acceptable and unacceptable content, competent moderators... Rage burns itself out and people get bored with vandalism.


It doesn't always work. There are many topics that are perpetual edit wars because both (multiple) sides see the proliferation of their perspective as a matter of life and death. In many cases, one side is correct in this assessment and the others are delusional, but it's not always easy to align the side that's correct with the people who effectively control the page, because editors indeed do have their own biases (whether because of ideology, a philosophy, a political party, a nation, or whatever else). For those topics, Wikipedia can never be a source of "truth".

