GPT-5 coming soon?

Will (has?) Readlang be moving to the newest model? Just curious.

It has, I mean GPT has. Not sure if Readlang has.
If GPT-5 has been released for your region, you can select it rather than the default 4…

Not sure if this is allowed, but I'm going a bit off-topic now… I completely stopped using ChatGPT when it moved to 4 and added real-time internet search, for the obvious reason that people ask questions about "latest and recent" matters. 3.5 was better because it was built on reliable sources only. By searching the internet live, it makes errors again, since it finds all sorts of rubbish there.

So I use Deepseek, and in all aspects of replies/explanations it's better than all the others in the contest at the moment. (And who needs "faked" AI images from GPT, which still can't render text properly in them?) Gemini is bad at continuing an ongoing chat: it treats your new input too much as a fresh direct command, without taking into account what was said before.

I wonder whether Steve has evaluated Deepseek in enough depth, or whether there are technical/financial/regional reasons for preferring GPT…

And Deepseek's R2 will be out any moment now. It's said to impress even more than R1 did :wink: we'll see.

It also makes economic sense, since the GPT-5 and GPT-5-mini API costs are much lower than those of GPT-4o and GPT-4o-mini.

1 Like

I’m sure I’ll add it eventually. I’ve been playing around with it and it behaves a bit differently to the previous models. Here’s what I’ve learned so far:

  • it's noticeably slower. Even with reasoning set to "minimal", translation requests take about 2s instead of about 700ms with the gpt-4 models
  • it doesn’t have a “temperature” parameter and apparently isn’t as good at creative writing tasks. This is really only an issue for the Story Bot and Conversation features
  • the API (the way my code interacts with it) is different, so it's not quite as simple as changing the model name. I'll need to do a bit of work to alter the code to support it, particularly if I want to keep the "streaming" feature, where the explanation in the sidebar gradually appears on screen as it's being generated
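To illustrate the API difference: this is just a sketch following the public OpenAI API parameter names — Readlang's actual code isn't shown anywhere in this thread, and the helper functions are hypothetical.

```python
# Hypothetical sketch of how the request shape changes between model
# generations. Parameter names follow the public OpenAI API; the
# translation-request framing is invented for illustration.

def gpt4_translation_request(text: str, target_lang: str) -> dict:
    """Chat Completions-style request: supports `temperature`."""
    return {
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "system", "content": f"Translate into {target_lang}."},
            {"role": "user", "content": text},
        ],
        "temperature": 0.3,   # tunable creativity
        "stream": True,       # tokens arrive incrementally for the sidebar
    }

def gpt5_translation_request(text: str, target_lang: str) -> dict:
    """Responses-style request: no `temperature`, adds reasoning effort."""
    return {
        "model": "gpt-5-mini",
        "input": [
            {"role": "system", "content": f"Translate into {target_lang}."},
            {"role": "user", "content": text},
        ],
        "reasoning": {"effort": "minimal"},  # keeps latency down
        "stream": True,
    }

old = gpt4_translation_request("bonjour", "English")
new = gpt5_translation_request("bonjour", "English")
print(sorted(set(old) - set(new)))  # keys that disappear: ['messages', 'temperature']
print(sorted(set(new) - set(old)))  # keys that appear: ['input', 'reasoning']
```

The point being that the gpt-5 call replaces `messages` with `input`, drops `temperature`, and gains a `reasoning` effort setting — so a simple model-name swap isn't enough.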

I’ll revisit this regularly to see when the latency improves, and then will do some more testing to see if it’s giving good responses for all the different features across different languages before switching to it.
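For what it's worth, the streaming change mentioned above can be sketched like this. Field names follow the public OpenAI SDK; the `SimpleNamespace` objects are stand-ins for real stream chunks, and none of this is Readlang's actual code.

```python
# Sketch of why streaming handling must change between model generations:
# Chat Completions streams send token deltas inside `choices[0].delta.content`,
# while the newer Responses API streams typed events such as
# "response.output_text.delta". Field names follow the public OpenAI SDK.

from types import SimpleNamespace

def text_from_chat_chunk(chunk) -> str:
    """Extract the incremental text from a Chat Completions stream chunk."""
    delta = chunk.choices[0].delta.content
    return delta or ""

def text_from_responses_event(event) -> str:
    """Extract the incremental text from a Responses API stream event."""
    if getattr(event, "type", "") == "response.output_text.delta":
        return event.delta
    return ""  # ignore non-text events (reasoning, lifecycle, etc.)

# Simulated chunks/events, so the sidebar-rendering logic can be exercised
# without a live API call:
chat_chunk = SimpleNamespace(
    choices=[SimpleNamespace(delta=SimpleNamespace(content="Hola"))]
)
resp_event = SimpleNamespace(type="response.output_text.delta", delta="Hola")

print(text_from_chat_chunk(chat_chunk))       # → Hola
print(text_from_responses_event(resp_event))  # → Hola
```

Either way, the consumer loop that gradually renders the sidebar explanation needs to be rewritten to pull text out of a different place.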

2 Likes

I appreciate your being careful with this.