“Keep it simple”—something I keep repeating to everyone I work with. I myself have been bitten too often by the temptation to use the latest tech, even when a trusted but less “sexy” alternative was available that could do the job perfectly.
But this post could have been made a lot simpler by using SQLite. Do away with all the Docker stuff; a simple file-based database will do for starters. If you need concurrent access later, it will be easy to port the DB to Postgres or similar.
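For illustration, a minimal sketch of the file-based approach using Python's built-in sqlite3 module (the table and columns are made up for the example):

    import sqlite3

    # The whole database is one file on disk; no server or container needed.
    conn = sqlite3.connect("app.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS posts (
            id    INTEGER PRIMARY KEY,
            title TEXT NOT NULL,
            body  TEXT
        )
    """)
    conn.execute("INSERT INTO posts (title, body) VALUES (?, ?)",
                 ("hello", "first post"))
    conn.commit()

    for row in conn.execute("SELECT id, title FROM posts"):
        print(row)

    conn.close()

Since it is plain SQL, moving to Postgres later is mostly a matter of swapping the connection and adjusting a few column types.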
Don't be so hard on him (and your fellow compatriots). His English is perfectly fine.
Wouldn't it be weirder if he had no perceptible accent whatsoever?
I find it quite refreshing to be able to tell where someone is from, judging from their accent alone. When I was a kid, growing up in a rather rural area, I could tell to within about 20 km which area around my hometown someone I had just met was from, just from the fine differences in their accent. But in fact very few people like being identified like that.
I get it, breach of privacy and all that... but on the other hand, is it so bad to know where someone is from when having a conversation with them?
The real problem is of course the biases in people's head... "dutch people are such and such", and so on. I say we should fight the biases, not the fact that one's geographical origin can be guessed.
Not Freak_NL, but in my experience the only people actually good at recognizing the Dutch accent are Dutch people themselves (unless the accent is really bad). So in practice, it's more of a quirk than something practically bad.
This is likely the result of an overworked and underpaid academic being asked to quickly draft a few questions for the online exam tomorrow. It is definitely a sign of insufficient quality control.
Not underpaid. No experience in real-life software engineering. Probably thinking of an ideal world where there is planning and then a waterfall implementation.
I think "waterfall" hits the nail on the head. The course and certification are the product of a waterfall process. Consider all the work needed to update the questions on the course: the syllabus, the training material, the tests. There are processes for each of those, with stakeholders to sign off.
If you want to change a question or add a new priority, you might make one of your stakeholders angry because that was their favorite topic and they have strong opinions.
The rollout has to be planned so that people currently studying the material are not caught off guard when they take the test.
I have a lot of empathy for the people involved, but it's still a bad product.
Questions like this, where you must have attended the class to know the answer, are often introduced to penalise no-shows. They can also expose suspected contract cheating in online exams.
However, it is quite risky, since it probably wouldn't stand up in court, e.g. if a failed student sues the school.
The only reliable path to a permanent contract at a German university is to join the administration. Everything else is fixed-term, with the exception of the professorships, which are a lottery with an incredibly high ticket price.
Yep. Some states still have a permanent contract for a senior assistant. Depending on how well the professor negotiates their terms, they might be lucky and get a second one. However, this is rarely the case unless you are a sought-after professor.
Logical reasoning has been a pretty solid branch of AI since its inception. Robust solutions exist for most problems; there is even a programming language based on its principles (Prolog).
With ChatGPT there is now a system that can express the results of automated logical reasoning in natural language.
The next step would be to combine the two, i.e. tell ChatGPT to explain the result of a logic-reasoning program in natural language. It could of course also be asked to translate a natural language query into Prolog code.
This will probably require retraining the model, but I guess the demo we are given by OpenAI leaves little doubt that this is perfectly doable.
ChatGPT has the potential to plug the gap between GOFAI and natural language, which is quite a feat.
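As a rough sketch, assuming SWI-Prolog is installed and glossing over the actual calls to the model, that pipeline could look something like this (the Prolog program below stands in for what the model would generate from a query like "Is Socrates mortal?"):

    import os
    import subprocess
    import tempfile

    # Hypothetical Prolog translation of "All humans are mortal. Socrates is
    # a human. Is Socrates mortal?" -- in the full pipeline this text would
    # come from the language model, not be hard-coded.
    prolog_program = """
    human(socrates).
    mortal(X) :- human(X).
    """

    with tempfile.NamedTemporaryFile("w", suffix=".pl", delete=False) as f:
        f.write(prolog_program)
        path = f.name

    # Run the query with SWI-Prolog; the goal prints "true" or "false".
    result = subprocess.run(
        ["swipl", "-q", "-g",
         "(mortal(socrates) -> write(true) ; write(false)), nl, halt",
         path],
        capture_output=True, text=True,
    )
    os.unlink(path)
    print(result.stdout.strip())  # -> true

    # The last step would hand this result back to the model to phrase in
    # natural language, e.g. "Yes, Socrates is mortal, because he is human."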
ChatGPT is damned impressive, but I hesitate to apply the word "reasoning" to its output. It's also not clear that the backend can be generalized to be able to plug the input into a symbolic manipulation program, or to gather the output and explain it. Only OpenAI knows for certain right now. Powerful if possible though!
Here's why I can't call it "reasoning":
> what is next in the series: a,b,c
> Without additional context, it is impossible to determine what the next element in the series "a,b,c" would be. The series could potentially continue with "d," "e," or any other letter of the alphabet, depending on the pattern or rule being followed. It is also possible that the series does not follow any pattern and is simply a random or arbitrary sequence of letters. Without more information, it is impossible to determine what the next element in the series would be.
> why isn't it d?
> As mentioned in my previous response, without additional context or information about the pattern or rule being followed, it is impossible to determine what the next element in the series "a,b,c" would be. It is not necessarily true that the next element in the series would be "d," as the series could potentially continue with any other letter of the alphabet or follow a different pattern altogether. Without more information, it is impossible to determine what the next element in the series would be.
Sounds like it is trained to avoid answering questions, and instead tries to bait you into giving it more information so that it can tell you what you told it.
It answers some things directly, but so many things are just avoidance. And then people say "see, it understood after a bit of back and forth, it is smart!", even though it is basically iterating through Google responses and reformulating them to fit the conversation until you say it got it right. Google v1 used pure logic and got you what you wanted, Google v2 tries to do natural language and sometimes misses, ChatGPT is the next step and tries to do full language but misses most of the time.
My point is, there are frameworks and languages for reasoning which are mature. But they require formalised input (e.g. code) and deliver formalised output.
As a language model, ChatGPT can translate back and forth between natural language and those formal languages. Part of that ability is evident in the numerous examples that demonstrate how it writes or explains code.
The version of ChatGPT that is public is quite restricted; it can't browse the web or run code.
A version of ChatGPT that can translate a logic query into Prolog, run the code, and translate the result back to the user should be perfectly capable of logical reasoning.
> The next step would be to combine the two, i.e. tell ChatGPT to explain the result of a logic-reasoning program in natural language. It could of course also be asked to translate a natural language query into Prolog code.
From what I remember, the very initial prototype of AlphaGo just had a neural net trained on historical games; effectively saying, "what kind of move would a traditional grandmaster make here?" with no planning whatsoever. This was good enough to beat the person who wrote the prototype (who wasn't a master but wasn't a complete novice either); and to make it able to defeat grandmasters, they added Monte Carlo tree search for planning (which also necessitated a separate neural net for evaluating board positions).
It sounds similar to your suggestion: A model which simply generates realistic-looking sentences is accurate maybe 85% of the time; to make it truly human (or super-human), it needs to be paired with some sort of formal structure -- the analog of the tree search. The difficulty being, of course, that the world and its knowledge isn't as simple to represent as a go board.
That said, making coding answers more reliable, by adding a logical structure explicitly designed to support search & testing, should be within reach.
> The difficulty being, of course, that the world and its knowledge isn't as simple to represent as a go board.
Humans suffer from the exact same limitation. The limit to correct inference and prediction is often the amount and quality of input data.
A language model that can extract information from text and interact with the user to refine and clarify that information could be tremendously useful for experts who understand how the model works.
Without that understanding it will be rather disappointing though, as we see with some of the reactions to ChatGPT and also Galactica (RIP).
University of Hertfordshire, greater London area, UK, Research Fellow, data science/machine learning, tinkering with electronics and robots encouraged, part-remote possible if working from within the UK. International applicants welcome.