Not sure we all have the same definition of what "successful" means... pretending to know, at a high technical level, what happens in the 5+ companies you are the "CEO" of, while fighting trolls on Twitter all day and having children with 4 different women (that we know of), at least one of whom wants nothing to do with you, all while switching your political stance 180 degrees because it serves your business well...?
I mean, the system we live in right now has a very particular objective function that optimizes for money. Just like in the social networks case, where they realized the system had shaped itself toward its objective function without considering the well-being of its members, optimizing for screen time through extremism, hate speech and fake news... in the same way, our current societies don't favor the nicest, most charitable, or even smartest members; they favor the ones willing to cheat, lie, step over and dehumanize others, all for the sake of "being successful".
Good thing for the planet. As they say there, the world population quadrupled in only 100 years, and we're worried it's shrinking a little? I mean, c'mon. Humanity could be a tenth of what it is now and we would still be able to "progress", or whatever we call multiplying like the plague...
Yeah, the economic and social systems might suffer and might need to change, and people will have to figure out new ways to live, but in the end it will just be part of this blip of time in the universe. Let's just hope we don't end up like the rats in that experiment, all the way to extinction: https://en.m.wikipedia.org/wiki/Behavioral_sink
This. It's like when I hear interviews with PhDs talking about AI and they say something like "AI will be smarter than humans", and I'm like: really? Where have you been all this time? Do you smart people ever leave your labs and go see the real world? LLMs are already smarter than the huge majority of humans on this planet, what are you talking about?
I know it's not in a holistic or deep philosophical sense, but even by just predicting the next token, without any real model of the world, LLMs are already capable of simulating basic reasoning that a lot of people lack. I mean, even the 7B Llama 2 model can tell you the Earth ain't flat... go figure.
They're also getting smarter at analyzing images and speech. They're still behind on some simple reasoning (e.g. o1-preview), but they're catching up quickly.
Obviously these models still have trouble interfacing with the real world.
I think the answer to this question might actually be yes, but I think there are plenty of things humans can do while walking that AI can't do at all. At least, not yet.
This is awesome; it was pretty easy to set up and start using.
Just one question/note: I tried a book in Mexican Spanish and noticed that it fails to catch the accents on words (words with tildes don't get the stress on the accented syllable), and I'm thinking it's because of the .pdf parsing, since the Piper voice sample on their webpage handles them properly (with both available voices).
Do you have an idea of what exactly could be happening and how I can try to solve it?
Thank you very much for the tool again!!!
Update: Ohh OK, I just checked the repo's Issues and found the one about Polish accents. I tried "--speak-diacritics" but got the same error: "Error: failed to read file passed as input to piper: read /tmp/ebook-convert-xxxxxxx.txt file already closed". If I skip the diacritics option it converts fine.
Update 2: I went to look at the code, and although I had never done anything with Go, I was pleased with how easy it is to read; plus, your code was pretty well structured.
I realized the removal of diacritics was happening in the RemoveDiacritics function inside lib/textProcessing.go, on line 26, so I modified it to leave the special characters untouched, compiled again, and voilà! It worked great.
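For anyone hitting the same thing, the change amounted to something like the sketch below. I'm going from memory, so the helper function and the whitelist of characters to keep are my own reconstruction, not necessarily what the repo does:

    package main

    import (
        "fmt"
        "strings"
        "unicode"

        "golang.org/x/text/runes"
        "golang.org/x/text/transform"
        "golang.org/x/text/unicode/norm"
    )

    // stripMarks decomposes to NFD, drops the combining marks (Mn)
    // and recomposes, so "café" becomes "cafe".
    func stripMarks(s string) string {
        t := transform.Chain(norm.NFD, runes.Remove(runes.In(unicode.Mn)), norm.NFC)
        out, _, err := transform.String(t, s)
        if err != nil {
            return s // on error, leave the text untouched
        }
        return out
    }

    // RemoveDiacritics strips diacritics from everything EXCEPT the
    // Spanish letters below, so Piper keeps the stress on the right
    // syllable ("está" must not become "esta").
    func RemoveDiacritics(s string) string {
        s = norm.NFC.String(s) // precompose accents before the per-rune pass
        const keep = "áéíóúüñÁÉÍÓÚÜÑ"
        var b strings.Builder
        for _, r := range s {
            if strings.ContainsRune(keep, r) {
                b.WriteRune(r) // preserve accented Spanish characters as-is
                continue
            }
            b.WriteString(stripMarks(string(r)))
        }
        return b.String()
    }

    func main() {
        fmt.Println(RemoveDiacritics("El árbol creció allí")) // accents survive
    }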
After that I used Calibre to convert a couple of .pdfs to .txt, got rid of the page headers/footers/page numbers with a pretty simple Python script, and ended up with pretty decent audiobooks.
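The cleanup itself is trivial. Mine was a throwaway Python script, but since the repo is Go, here's the same idea sketched in Go; the filename and the regexes are just examples, you'd adapt them to whatever your PDF's running headers look like:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    var (
        // A line that is nothing but a page number.
        pageNum = regexp.MustCompile(`^\s*\d+\s*$`)
        // Running headers/footers repeated on every page (example pattern).
        header = regexp.MustCompile(`(?i)^\s*cap[ií]tulo\s+\d+\s*$`)
    )

    func main() {
        in, err := os.Open("book.txt") // e.g. the output of Calibre's ebook-convert
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer in.Close()

        out := bufio.NewWriter(os.Stdout)
        defer out.Flush()

        sc := bufio.NewScanner(in)
        for sc.Scan() {
            line := sc.Text()
            if pageNum.MatchString(line) || header.MatchString(line) {
                continue // drop bare page numbers and repeated headers
            }
            fmt.Fprintln(out, line)
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }

Something like "go run clean.go > clean.txt", then feed clean.txt to the tool.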
I am curious whether Sam has ever contributed to a technical paper. Honestly, I wonder if people like him or Musk ever formally contribute to the technical side of things, beyond publications of a more speculative/descriptive/philosophical bent.
I doubt it, but I do believe that without Sam, OpenAI would be lost fighting "theoretical safety demons" with Ilya, as opposed to being an instrument of accelerating technological change.
Definitely, as will anyone collaborating with the project, like Microsoft or Satya Nadella deciding whether or not to help Sam Altman... we'll see, but I hope we as a society move toward praising the actual hands-on work more than the great PR.
Elon Musk co-founded and leads Tesla, SpaceX, Neuralink and The Boring Company.
He didn't co-found Tesla, he bought it... that's just one of many examples...
He is smart, he has vision, yes, but I really doubt he's a rocket scientist as he likes to pretend. He is smart enough to pay the right people good money though.
Yeah, no, he is definitely not half as smart as the actual scientists and engineers doing the hard work. He might be a great PR agent, but that's it. As I was saying above, I really hope we as a society can evolve toward appreciating and praising more and more the actual hands-on work of the people in the labs and in the field, rather than the PR agents pretending to know it all.
He has also built a cult following; any negative comment about him on most socials will get you downvoted or abused. He's built this narrative that any criticism of him or his products is some type of liberal-left takedown. Same as Trump, Rogan, etc.
According to many people, he is a messiah, saving us from hurricanes, climate change, AI, etc. How supporting Trump leads to good climate outcomes is beyond me...
I think this is the main issue with these tools... what people are expecting of them.
We have swallowed the pill that LLMs are supposed to be AGI and all that mumbo jumbo, when they are just great tools; as such, one needs to learn to use the tool the way it works and make the best of it. Nobody tries to hammer a nail with a broom and then blames the broom for not being a hammer...
To me, the discussion here reads a little like: “Hah. See? It can’t do everything!” It makes me wonder whether the goal is to convince each other that yes, indeed, humans are not yet replaced.
It’s next-token regression; of course it can’t truly introspect. That being said, LLMs are amazing tools, o1 is yet another incremental improvement, and I welcome it!