Hacker News | TheOnly92's comments

An abstract generally has the following format: it starts by describing the background of the problem, then states the problem the paper aims to solve, the method the paper uses, and finally a conclusion. The abstract doesn't assume much prior knowledge, and can probably still be understood 10 or 20 years from now. The LLM-summarized version, by contrast, totally skips the background and jumps straight to the problem and the method.

Now, I'm not saying there is no room for improvement. The fixed format of an academic paper, with an abstract followed by the paper itself, may well be replaced by something like what is shown here, and I genuinely hope to see more experimentation with the communication of scientific studies, but that is unfortunately not a focus in the academic world.


It makes sense to debate what should be included in the abstract. Should background, problem, method, or conclusion be included? My personal preference is to read only the problem and method, because that's what gives me inspiration and helps me decide whether the paper is relevant. I acknowledge everyone may have their own preference, and as mentioned in other comments, a major feature of LLMs is that you can tune them with instructions to get the level of detail that you want. But I think the main contention is that the paper's authors could have done just slightly more work, beyond getting the paper accepted, to reach a much wider audience than their specific field.


I think it matters who we write papers for. Right now we write papers and abstracts for reviewers, because that's how we're measured and that's where we compete. But I'd say we generally believe that papers are written for other researchers, and I agree that should be the goal. As the competition increases, though, we're starting to write more for the media, since this can usually pass review and gathers lots of citations (these people tend to be from big schools, which have large media arms and are willing to pay for articles in news venues).

This is why I'm deeply frustrated with academia right now. Papers are supposed to be how I communicate with my fellow researchers working on the same or similar topics. They're not for communicating with someone in a different field, and not for communicating with the lay public (nor should they be!). It is the job of science communicators to act as the bridge between laymen and researchers, and a lot of them do a poor job of it because they're beholden to the YouTube algorithm, not to accuracy. Hell, Quanta published a shit piece recently about quantum wormholes and machine learning, and what did they do when it was called out? Just wrote another article and added a note to their YouTube video. Nature is pulling similar shit. I get wanting to make science popular and exciting, but truth/accuracy has a lower bound in complexity, whereas fantasy doesn't.

https://www.quantamagazine.org/physicists-create-a-wormhole-...

https://www.quantamagazine.org/wormhole-experiment-called-in...

https://www.youtube.com/watch?v=uOJCS1W1uzg


I really wish they would limit abstracts to those. I feel like abstracts should be like an index/jumping-off point, in spirit if not in structure.


I happen to be teaching a programming course currently; it's not in English, and the language I'm teaching is C. My current experience is that the majority of the students do not seem to be using ChatGPT at all, even though I encouraged its use at the beginning of the course.

For my own course, I think several factors contributed to students not utilizing ChatGPT as much:

    - The assignments are not in English, and the performance of ChatGPT in languages other than English is subpar.
    - The programming language I'm teaching is C; I'd imagine Python/JavaScript and other more popular languages might lead to different outcomes.
    - I specifically designed the assignments so that copy/pasting the assignment into ChatGPT does not lead to a usable answer (by restricting the use of certain standard library functions and making the assignments more complicated).
    - The course is not introductory, i.e. a previous course already taught the basic syntax of C and the basics of programming, so I can make my assignments much more advanced.

It's difficult to say whether advancements in LLMs will make my job harder, where, say, copy/pasting my more complicated assignments leads to correct results. But from what I can see right now, LLMs still have trouble solving novel problems, so it's probably always possible to come up with assignments that are difficult for them to solve.


I've had a few online code assessments that appear to have been hardened against ChatGPT "attacks". I failed at solving a problem that was just "Compute values of the Collatz conjecture for input n" because they wrote it to sound like an extremely difficult graph problem about being lost in a forest but being able to enter a "magic door".

I fed the problem into ChatGPT later and it was utterly unable to comprehend it, but confidently gave wrong answer after wrong answer.


Interesting. I suppose it makes sense that rephrasing the problem and adding a bunch of nouns that have no relation to the problem at hand would confuse LLMs. It will be interesting to see how LLMs adapt as more and more of these techniques develop.

In my own assignments, however, I focus less on algorithmic stuff and more on adding and mixing several things together. E.g. instead of just sorting, do group & sort, plus a combination of other practical stuff like reading big-endian binary files.


3.5 or 4? March or June model?


I've been using ChatGPT and Bard with C++, and they are quite helpful for boilerplate and reference (replacing Google/Stack Overflow).

Just for fun, I asked ChatGPT 4 to calculate the RMSE between two vectors, both in English and in Portuguese (also translating RMSE to Portuguese), and it gave me the same code for both questions (asked in separate instances). It would be interesting to know what restrictions you applied.


It definitely depends on the task at hand, but when you're teaching programming you don't teach stuff with boilerplate. Using ChatGPT as a reference to replace Google/Stack Overflow was definitely one of the ways I expected the students to utilize it, but it probably wasn't providing answers in ways a beginner/novice could understand.

I'd expect simple tasks like calculating RMSE to definitely be within the abilities of an LLM; you might combine things like actually reading the vectors from a CSV file (or a custom format), calculating the RMSE, then sorting the results, etc., to see the limitations of LLMs. Most students have no issues with calculating RMSE; they have issues with all the other stuff that leads up to it, and with the combination of sorting and other tasks.

Regarding the restrictions, most of them are just: don't use itoa/strtod or strcpy or certain other standard library functions.


Thanks, yes, RMSE is a simple task; I was focusing on its ability to translate the name (raiz quadrada do erro quadrático médio, i.e. root mean squared error) correctly. It is funny that the Portuguese code has Portuguese comments, but the name of the function is calculateRMSE even though I didn't mention RMSE in the prompt.

I agree with you: in my experience, ChatGPT is a better search engine, but it is not capable of composing the various parts of an application in a cogent manner. I also think that the current UI is not appropriate for software development, and I am sure there are efforts going on to create something closer to Jupyter notebooks for programming. That may be a game changer for your students (and you).


True; in my experience, the variable names and function names remain in English despite the prompt. Maybe it's just the overall convention in the programming world, or maybe ChatGPT is fine-tuned to do so.

I don't think Jupyter notebooks or similar REPL interfaces will help much for my course, at least with the current syllabus. I'm aiming to teach pointers, memory management, etc.: the more fundamental parts of how to interact with computers, rather than a high-level language. Though I agree that the current UI is suboptimal; improvements that let students visualize memory layouts and see how their code manipulates memory would help a lot.


I'd hazard a guess that a REPL on a simple virtual machine would work wonders for teaching pointers and memory management.

I can't recall exactly but I think https://godbolt.org/ might do that for example?


A simple virtual machine might be nice, but imagine the pain of trying to guide students through installing something across different environments.

Godbolt is a compiler explorer: it shows the disassembly of your code, but there's nothing to visualize each step of execution.


> from what I can see right now, LLMs still have trouble solving novel problems

Have you been using 3.5 or 4?


I have been using 3.5, but when I was designing the assignments I asked a friend who had access to 4 to check them, and the code it produced was still incorrect.


I agree that perhaps the metrics are not that useful in themselves, but I think you're giving too little credit to the paper where some credit may be due.

I think the paper is correct that there are no "emergent abilities", i.e. abilities that suddenly appear when the scale of the model is increased. And though it might not be fully accurate, the paper did make some effort to formalize this, and I think it is a good attempt at proving the point.

However, as we recognize, there are still some weird discontinuities where at one point the model is useless and suddenly it becomes very useful. This "discontinuity" is IMHO probably just perceptual, while the underlying metric is continuous.


They actually announced the length and the characters (letters/numbers) used in the password at yesterday's press conference, if you can believe it...

Many people on the internet guessed what the password probably was (city name + year).


Source? Japanese original is fine.



Well, the simple way to look at this is that when you transact with another bank, you're usually required to have an account at that bank. Just as a consumer can't magically change their balance, a bank simply can't magically increase its balance at other banks. To increase your balance there, you have to actually send money to that bank.


The problem is that in the end, you still need to move the money from the customer's bank to yours. That can happen either directly or through an intermediary. If you have a direct relationship with the customer's bank, then all is fine and dandy, but as a risky business you'd probably have a hard time establishing that kind of direct relationship.

The easier alternative is to find an intermediary bank that has connections to a lot of major banks. If you can actually find an intermediary bank willing to do business with you, i.e. open a correspondent account for your business, then you can pretty much accept payments. The trouble is that if other banks find out your intermediary bank is doing business with you, they might be able to force it to cut off that relationship. I believe Bitfinex allegedly had that issue.

So in the end, unless the customer is directly handing cash to you, you're pretty much at someone's mercy, regardless of the channel the payment goes through. If you want to set up a new credit card brand, better make sure the customer's bank is willing to send you the funds when it's time for settlement.


Pretty spot on: a hospital in Hokkaido had to stop taking in new patients because nurses had to take leave to care for their children.

Source (Japanese): https://www3.nhk.or.jp/sapporo-news/20200227/7000018387.html


It says they even closed a whole section of the building, and there are cases of patients being asked to accept postponed hospitalization dates.

But there is a bit more to the story: that hospital is also the only one in the region designated for handling infectious disease outbreaks, so in the face of the staff shortage they also have to plan for the capacity to respond to cases of the virus.


It's around the Cayman Islands, in the Caribbean Sea.


The quality is definitely quite high; there's just some lag due to the satellite link.


Wouldn't there be a physical tether?

Satellite seems like the last thing that would be used on a submarine, but I have almost zero knowledge of the actual tech used.


AFAIK, what they're currently doing is

submarine ----optic fibre----> mothership ----satellite----> internet


Oh, it seems the fiber cable has been cut... by prawns, maybe, lol.


Yes, what a bummer. It was much more fun to watch while it was broadcasting video from the bottom. https://twitter.com/niconicoen/status/348479142871834624/pho...

Now it's back on the surface again.

http://en.wikipedia.org/wiki/DSV_Shinkai_6500


This is wonderful, they're broadcasting live from the bottom of the sea!

