Best practice for using LLMs

The users of Readlang and this forum are a phenomenal group of people.

I was wondering how other people use LLMs. Maybe we could start a thread and share how we use them. These models get more powerful every day, and people keep finding new ways to use them. What questions do you ask them? For what kinds of applications? What resources have you found useful for learning to use them better?

We could all share insights and learn from each other how to use them better.

5 Likes

Exploring art in a museum:

2 Likes

I share your sentiment about Readlang’s community.

Personally, I currently use LLMs for reading, language learning, legal breakdowns and advice (which I take with a huge grain of salt, of course), cooking recipes based on the ingredients I have, and for assistance with coding. A year ago I barely used ChatGPT; now it's basically a daily occurrence. I think the trend has established itself, so I'm pretty sure I'll discover more ways to integrate this technology into my life in the future.

3 Likes

Love the idea of using it to explore art in a museum!

I use it for so many things! I now use ChatGPT more than I use Google search.

  • Getting definitions and explanations of things I hear in podcasts that I don't understand
  • Cooking recipes
  • Writing commands for the macOS or Linux terminal
  • Explaining Spanish tax laws and procedures (to be taken with a grain of salt, as another poster mentioned above!)
  • Asking for a brief history or summary of a topic, person, or book. Useful when I'm reading about history or current affairs.
  • Basic programming tasks (I use the o1 model for this), e.g. writing a small script or self-contained module. (It's still not good enough to reliably work on a large existing codebase, though.)
  • Ideas for vacations: where to go and what to do
  • Showing it images of things and asking what they are. e.g. recently I showed it a photo of a halogen light bulb I needed to replace, and it confirmed that it was a GU10
  • Getting it to correct my writing in Spanish for email or text-message interactions

I’ve also used the ChatGPT API for various things:

  • to extract structured data from PDF invoices to speed up my accounting
  • Readlang! (obviously)
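As a sketch of the invoice-extraction idea above (field names and prompt are hypothetical; this assumes the `openai` Python package and that you already have the invoice text extracted from the PDF), the core pattern is asking the model for strict JSON and then validating the reply before trusting it:

```python
import json

# Fields we want from each invoice (hypothetical schema for illustration).
INVOICE_FIELDS = {"vendor", "invoice_number", "date", "total"}

PROMPT_TEMPLATE = (
    "Extract the following fields from this invoice and reply with JSON "
    "only, using exactly these keys: vendor, invoice_number, date, total.\n\n"
    "Invoice text:\n{text}"
)

def parse_invoice_reply(reply: str) -> dict:
    """Parse the model's reply and check that all expected keys are present.

    Models sometimes wrap JSON in markdown fences or add stray prose, so we
    strip fences before parsing and fail loudly on missing keys rather than
    silently booking a half-extracted invoice.
    """
    cleaned = reply.strip()
    cleaned = cleaned.removeprefix("```json").removeprefix("```")
    cleaned = cleaned.removesuffix("```").strip()
    data = json.loads(cleaned)
    missing = INVOICE_FIELDS - data.keys()
    if missing:
        raise ValueError(f"model reply missing fields: {sorted(missing)}")
    return data

# The actual API call would look roughly like this (requires the `openai`
# package and an API key, so it is not run here):
#
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(
#       model="gpt-4o-mini",
#       messages=[{"role": "user",
#                  "content": PROMPT_TEMPLATE.format(text=invoice_text)}],
#   )
#   invoice = parse_invoice_reply(resp.choices[0].message.content)

# Example with a canned reply, formatted the way a model often returns it:
reply = ('```json\n{"vendor": "Acme SL", "invoice_number": "2024-017", '
         '"date": "2024-03-02", "total": "121.00"}\n```')
invoice = parse_invoice_reply(reply)
```

The validation step is the point: treating the model as a probabilistic extractor means every reply gets checked against the expected schema before it touches the accounts.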

Interested to hear other uses that people have!

3 Likes

On o1 pro

https://marginalrevolution.com/marginalrevolution/2025/02/o1-pro.html

This applies to ALL of its output, as it is still prone, by design, to random confabulation. It is just becoming more difficult to detect as we (naively) trust more complex tasks to it while populating the internet with the embedded falsehoods of its output. "Deep Bullshit" would be a better name right now than "Deep Research".

Deep Research, Deep Bullshit, and the potential (model) collapse of science

How is this different from what people do? Just turn on your financial news TV.

We should always be skeptical and if the information is important and consequential, we should always check it.

These are statistical, probabilistic models. So if a model has a 95% probability of being right, that means it's wrong 5% of the time. Is that bullshit? If a medical diagnostic test is right 95% of the time, do we call each incorrect diagnosis bullshit and stop using the test?

Understanding that they are probabilistic means that we don’t fully trust them, but also that we don’t throw them out.

I’ve been playing a lot with o1 pro, and hallucinations have come down significantly because it uses a lot of compute to verify information.

If I ask it very hard questions it will occasionally make mistakes, but I would argue a human would make even more given the same questions. So what is the bar?

o1 pro, for example, cites its sources. Here’s one regarding a question about the Eleusinian Mysteries. This is not random bullshit from the internet.

References:
– Burkert, W. Ancient Mystery Cults. Harvard University Press, 1987.
– Mylonas, G. E. Eleusis and the Eleusinian Mysteries. Princeton University Press, 1961.
– Plato, Republic, Phaedo, Symposium (Loeb Classical Library or other critical editions).
– Cicero, De Legibus II.36 (on praise of the Eleusinian rites).

The way to test these models is to ask them hard questions about what you already know. Test them in a language you know or about a text you know well. If given proper context, I would argue they are more right than people.

Again, o1 pro has come a long way. To cite Tyler Cowen in the post above, if you haven’t played with o1 pro your views about what these models can do are outdated.

I’m using Deep Research. I find it extremely useful for certain tasks. It saves me time. And the beauty is that I can actually check its work. And I do. I can spend hours googling around, or minutes checking its sources.

The point remains, though, that actors who want to spread disinformation will do it. But is that new? Disinformation and misinformation have been around for thousands of years. The internet has been full of them for a long time. Yet that didn’t stop Wikipedia from being very effective, and about as accurate as the Encyclopaedia Britannica.

Don’t ask DeepSeek questions about China if you want an accurate answer. The same might be true of OpenAI on other topics. Yet we have multiple competing models, and we’ll get even more.

I think these models will compete on accuracy. It will work like the press: a reputable newspaper with a discerning audience is afraid of publishing false stories because it will lose subscribers, and its competitors will keep it honest.

We will still have quality resources: Oxford Handbooks, Routledge Encyclopedias, etc. How do those texts become authoritative? How does knowledge build? Through a peer-review process, the scientific method, etc. Recursive models are applying the same concepts.

Take it with a grain of salt, always check the work, but keep your eyes open. The changes are real. Or you can discount it and miss out. There are plenty of people who still look up words in a paper dictionary; my local bookstore has a full shelf of them.

I use it for so many things! I’ve tried a few different models, but for a while, Gemini was my go-to for cross-checking my language learning—it felt like having a patient study buddy. I also relied on it to organize my endless lists for reading, gaming, shows to watch, and even coding projects. (Confession: I’d ask it to translate terminal commands between systems, and it saved me so much frustration!)

Lately, though, I’ve been gravitating toward DeepSeek—controversial, I know! But the language flow feels more natural to me, almost conversational? It’s helped me refine my prompts in a way that just clicks. The responses are clearer, which either gives me exactly what I need or inspires me to tweak my questions until everything aligns. It’s like… the better the tool understands me, the better I learn to communicate. A little meta, but it works!

1 Like

https://marginalrevolution.com/marginalrevolution/2025/02/deep-research.html?utm_source=rss&utm_medium=rss&utm_campaign=deep-research&__readwiseLocation=