People don't like to type boilerplate, so they build wrappers that encapsulate code at the correct level of abstraction to make things easier. ggplot2 and dplyr are some outstanding examples.

But #LLMs don't care. In fact, the incentive might be the complete opposite: more code generated with an #LLM means companies get to charge for more tokens, and they get to say bullshit like "50% of our code is generated with #AI".

@eliocamp@mastodon.social

I had a similar experience with one of my students in my bioinformatics lecture a few months ago. The task was to plot something (an xy-plot). The student approached me, quite proud of her code, and asked me how to improve it. I asked her if it was written with an AI, which she denied initially but later said that she had some help. This is kind of OK. I showed her that the 17 LOC could be replaced by a single LOC with matplot(...). She did not critically question her initial solution.
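
For context, base R's matplot() plots the columns of a matrix against a common x in one call. A minimal sketch of the kind of contrast described (the data and the verbose version are invented for illustration, not the student's actual code):

```r
# Toy data: two series to plot against a common x.
x <- 1:10
y <- cbind(sin(x), cos(x))

# Verbose version, roughly what boilerplate-heavy generated code
# tends to look like: one call per series plus manual bookkeeping.
plot(x, y[, 1], type = "l", col = 1, ylim = range(y),
     xlab = "x", ylab = "value")
lines(x, y[, 2], col = 2)
legend("topright", legend = c("series 1", "series 2"),
       col = 1:2, lty = 1)

# One-liner: matplot() plots each column of y against x.
matplot(x, y, type = "l")
```

The one-liner is not just shorter; it states the intent ("plot these columns against x") at the right level of abstraction, which is exactly what the wrappers in the first post are for.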