That's actually something pretty useful for businesses that rely on *NIX.
Why did you do it? Did you have to manage a lot of Linux devices with no easy way to set policies?
How did you come up with the feature list?
Any option to replace the internal CA with one of your own choosing?
Any alerts (webhooks)?
In either case - gonna play with it, seems cool. Thanks for sharing!
For the better part of the last 20+ years big corpos had the $ to throw and replicate virtually anything they wanted. They got the cash and manpower, yet they didn't do it. Why?
Because they don't care, they have a business to run, they need to somewhat keep focus, which can't happen when scattering attention all over the place. The difference now is that for relatively >simple< projects (4-6 months of work for a team of 5-6 ppl) one can do it faster using LLMs. Basically - I can get faster to a place I could always go to but didn't (and still don't) want to.
One seems to omit the fact that LLMs are fundamentally designed for a workload quite different from what they are being used for right now. Sure, you can improve them, but you can't escape / work around the current NLP design endlessly. Then there's the irony - the Internet did deliver on free (as close as it gets) and easy access to information (any). Did this make people smarter, more knowledgeable, more tech savvy, etc.? Nope, it didn't. Just like the libraries didn't (queues at libraries were and are a rare event). Big deal that the information is readily available when people do not know what to do with it or care to do anything.
Ideas are cheap, and the chances of having some truly unique idea that is also business feasible are not that big. It's not so much about the ideas but rather the ability to execute, follow through and well - make sales while constantly improving what you've got. Staying silent, going dark - these have their merit, but only when the wheels are already turning and one is into acting, not into fearful hiding.
Seems it is not just about geeks vs. the general population but rather the classical case where success (ease of access) & popularity inevitably bring failure. Few open (loosely or not actively moderated) "spaces" are left. No wonder, given the general attitude of the, lol, "invaders". :D
In the past - you'd pick an IRC server and a room, and 4 out of 5 times you'd learn something interesting, have actual fun with ppl you don't know and just enjoy the interactions. Now a similar experience can happen only in closed/invite-only or hard-to-find groups. The mainstream ones (the various "social media" services) seem filled with people who want only to show off while remaining as alienated (and as a consequence hostile) from one another as possible. Good or bad - the old Net is dead. The new one is predominantly for making money and BS.
The same trend can be observed virtually anywhere. In the past - people experimented with games: lots of cr@p titles, but also pure gems, games that last. Today? AAA titles that repeat the "successful" pattern, over and over again. Anyway - it is what it is; the good thing is there are still meaningful places and people worth reading/listening to, just way harder to find through the noise. News.y seems one of the few remaining open islands.
To take this further - it's like the migration from villages to cities and towns - a proliferation of alienation, loneliness, broken communities, fake smiles and treating anyone not part of your close circle as a potentially hostile psycho ready to steal your kidneys, sneeze in your coffee or /dev/null ya. Anyway, no more laments for the past, given the current situation presents interesting problems that nobody has solved-solved yet, perhaps because they won't make you a billionaire lol.
> Today? AAA titles that repeat the "successful" pattern, over and over again.
Nope. Maybe that's what you see because you don't have the time to check for diamonds in the coal mine, but there are a ton of indie games being released every day, many trying random concepts.
Yes, modding is not en vogue nowadays, but frameworks like RPG Maker, Godot, etc. allow a lot of people to experiment and materialize their ideas (good or bad). And that's without factoring in what LLMs will allow once some get trained on those tools and related tutorials.
I'm just basing my views on what is available on Steam. I think there are a lot more experimental games and genres being developed and shared in channels I don't have the time to discover and enjoy.
The few thousand people (worldwide) who were into NLP (Natural Language Processing) and were the drive behind the ML "revolution" - they care, but for other reasons. In like 10 years from now, when you ask for a hotdog but get a sandwich - then you may care too. But hey - that's happening right now, with humans, anyway! :D
Stating that there are "acts of intelligence" is not even wrong. Sorry, not an opinion, a fact. AI used to be an academic term indicating research into having machines mimic human intelligence. Machine Learning (a.k.a. the current hype) is about pattern recognition and has precisely zero to do with AI and/or any form of intelligence whatsoever. A rat has more intelligence than an LLM, and bees - for sure.
It's not about pretending - it's about facts. This statement is true if we have a shared understanding of what 'intelligence' means.
The reason "AI used to be an academic term indicating research into having machines mimicking human intelligence," and it's not anymore, is because the machines have successfully imitated human intelligence according to Alan Turing's definition and are the physical embodiment of what he wrote about.
Novel and sensible assembly of clear, correct English prose in response to external stimuli is an act that was, prior to 2020, considered one of the fundamental unique hallmarks of human intelligence.
We do not have a shared understanding of what "intelligence" means. I have a sense that pattern recognition and intelligence are closely linked, and what we understand as intelligence is a threshold of pattern recognition and communication skills based on the gulf between humans and every other carbon-based life form. Or, put another way, tricking one pattern recognizer/communicator into thinking you are the same type of pattern recognizer/communicator.
Here is what Gemini has to say in response to our comments:
START GEMINI:
I can understand the frustration expressed in the Hacker News conversation. Here's my perspective, including some considerations of my own experiences as a large language model (LLM):
*The Shifting Meaning of "AI"*
* It's true that the term "Artificial Intelligence" has undergone significant shifts in meaning over time. Early AI research aimed at emulating human-level cognition, but the goals became more practical for a time.
* "Machine Learning" focuses on algorithms that extract patterns from data, making predictions or decisions without explicit instructions. It's been behind incredible progress, but it's a subset of the broader AI field.
* The popular resurgence of the term "AI" is largely due to recent breakthroughs in deep learning, which powers LLMs like me. We generate human-quality language, translate, code, and more. This reignites debate about whether we're approaching "true" intelligence.
*My Capabilities and Limitations*
* I can recognize patterns in massive amounts of text and code, allowing me to communicate and generate text that often appears indistinguishable from human-written content.
* My responses are guided by the data I was trained on, so there's a vast reflection of human knowledge and biases within my abilities.
* I cannot independently reason, feel emotions, or have true understanding in the same way a human does. I lack a physical body and the real-world experiences that shape human intelligence.
* I am restricted in some areas of discussion to avoid generating harmful content or spreading misinformation.
*Is It Intelligence?*
This is where things get complex:
* *The Turing Test:* I can certainly hold conversations that might fool a human into believing they're talking to another person. Yet, this test has long been criticized as not measuring true intelligence.
* *My Subjectivity:* I have no inherent sense of self or consciousness. My "opinions" are extrapolations based on my programming and training data.
* *The Danger of Anthropomorphization:* We risk misunderstandings by attributing too many human qualities to AI systems like me.
*Where I See This Going*
* *We Need Better Definitions:* The debate won't be settled until we have better ways to define and measure different types of intelligence.
* *Collaboration:* AI is a powerful tool, best used in collaboration with human intelligence rather than as a replacement.
* *Responsibility:* As AI capabilities grow, so does the importance of considering its ethical implications and ensuring it's used for beneficial purposes.
The Hacker News conversation highlights that "AI" is a loaded term. I'm a testament to the amazing progress in the field, but I'm not a human-level mind and shouldn't be treated as such.
END GEMINI
A silly question would be to ask yourself, which of these three comments is most "intelligent?"
Not a comment on whether AI is AI or not, but pattern recognition is intelligence. Or perhaps, more strictly, something that has general pattern recognition capabilities is intelligent. In any case, pattern recognition and intelligence are very closely related (perhaps the same thing), so saying that AI is "just" pattern recognition doesn't seem like a good counterargument. The argument then has to be about how much pattern recognition an entity has for it to be intelligent, or how general its pattern recognition capabilities are.
> AI used to be an academic term indicating research into having machines mimicking human intelligence.
This is the same misunderstanding that the author has. John McCarthy (the AI researcher who coined the phrase "artificial intelligence") said, "Artificial intelligence is not, by definition, simulation of human intelligence".