Explain feature in Chrome Bookmark

Hi,

It would be great to have the explain feature available in the Chrome extension/bookmark. I use it 99% of the time when reading on Readlang. I find the explanation, particularly the second paragraph generated by the AI in context, extremely useful. Is there any way to use it other than importing the text into Readlang first? Would be great to have it in the extension.

3 Likes

Good suggestion. The Chrome extension has been falling behind the main readlang.com reading experience. I did play around with this but didn’t come up with a UI I was totally happy with. Should take another crack at it!

I started reading books on the main website. But I still use Readwise Reader for RSS feeds, saving stuff from the internet, etc. So being able to use Readlang on top of that is a great option!

Where can I learn more about how the translation works? For example, when I click on a word and I get the green word on top, is that done by the AI or the dictionary? Does it take into account the context?

Thanks!
Marius

I should write a blog post about this. Basically it works like this:

  • Free plan (and all plans previous to this year): The highlighted word/phrase is translated by Google Translate with no surrounding context taken into account.
  • Premium plan with “Context aware translations” enabled (see Settings page): The highlighted word/phrase is translated using ChatGPT 4o-mini and takes the whole sentence into account to provide a more appropriate translation given the context.
  • Premium Plus with “Context aware translations” enabled: Same as above but uses the full ChatGPT 4o model, which provides even more accurate results, particularly for less commonly spoken languages.
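The context-aware flow described above can be sketched roughly like this. Readlang's actual prompt wording and request code are not public; the function name, prompt text, and message structure below are my own illustration of how passing the whole sentence alongside the highlighted phrase lets the model disambiguate it:

```python
# Hypothetical sketch of a context-aware translation request.
# Everything here (names, prompt wording) is illustrative, not
# Readlang's actual implementation.

def build_translation_messages(phrase, sentence, source_lang, target_lang):
    """Assemble chat messages that include the full sentence as context."""
    return [
        {
            "role": "system",
            "content": (
                f"You translate {source_lang} to {target_lang}. "
                "Reply with the translation only."
            ),
        },
        {
            "role": "user",
            "content": (
                f"Sentence: {sentence}\n"
                f'Translate the {source_lang} phrase "{phrase}" '
                f"into {target_lang} as it is used in this sentence."
            ),
        },
    ]

# "banco" alone is ambiguous (bank / bench); the sentence resolves it.
messages = build_translation_messages(
    phrase="banco",
    sentence="Me senté en el banco del parque.",
    source_lang="Spanish",
    target_lang="English",
)
print(messages[1]["content"])
```

These messages would then be sent to whichever model the plan uses (e.g. 4o-mini on Premium, 4o on Premium Plus), whereas the free plan sends only the highlighted phrase to Google Translate with no surrounding sentence.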

That’s awesome!

I found that reading with ChatGPT 4o model takes my entire reading experience to another level. It’s not just that it’s able to pick the right word. It’s that it’s able to provide historical context. It understands metaphors and why they are used. It shows a deep historical and literary knowledge. It’s like being tutored by a literary critic. A lot of people miss this about Gen AI.

I used ChatGPT in the past when reading, but this is so much more convenient. It’s right there. And it has already read the text without me having to do anything.

Just curious, what is the text that the GenAI read when I select a word? If I have a 200,000 words book, does it read the whole thing, or just a number of words before and after the word I highlight?

Also, do you mind sharing the prompt that ChatGPT is being asked? Just out of curiosity. It’s a very good prompt.

One thing I would like to use it more for is just reading English. Right now the way I do that is with the explanation given in English. But it would be great to have a synonym in English when clicking on a word. English is my strongest language and I would prefer not to have the word translated into another language. That would also be a very good option to have once one gets strong in a foreign language. Let’s say I already have a 20,000-word vocabulary in a language I’m learning. I can probably get by with explanations in that language. So an English-English, Spanish-Spanish, etc. option would take the reading experience to the next level.

Reading like this is truly transformational. More people should know about this.

I would recommend extending the marketing/story/About Readlang. This is not just a tool for language learners. It’s a tool for reading at a higher level. Period. I’m using it to read old or difficult texts in my native language.

There are a few more things that could be explored to enhance the tool and make it even better for reading in one’s native language. Explanations in the target language are one. But how about another tab with a slightly different purpose? I’m thinking of something similar to a commentary: enlarge the context and prompt the model to explain a paragraph, a few paragraphs, or even a chapter instead of a word.

The market for readers in general is vastly larger than for language learners, and this tool has such great potential for everyone. I just wish I had had this in college!

1 Like

Great to hear that this is working well for you!

At the risk of divulging trade secrets :-)… If you are talking about the Explain feature, the AI gets to read the single sentence that the word you click on appears in, the same sentence that is added to the saved card as context. The prompt is quite simple and short. This is an example of the prompt that Readlang uses when explaining the Spanish word “hola” in English within the context sentence “Hola, buenas tardes.”:

> Hola, buenas tardes.

What does the Spanish word "Hola" mean in English? Explain the usual meaning as well as the meaning in this context.
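Steve's example above can be written as a small template. Only the quoted "Hola" prompt is from this thread; parameterizing it into a function is my own assumption about how the pieces fit together:

```python
# Sketch reconstructing the explain prompt from the example above.
# The function name and parameters are illustrative; only the output
# format for the "Hola" case is confirmed by the thread.

def build_explain_prompt(word, sentence, source_lang, target_lang):
    """Quote the context sentence, then ask for the word's meaning."""
    return (
        f"> {sentence}\n\n"
        f'What does the {source_lang} word "{word}" mean in {target_lang}? '
        "Explain the usual meaning as well as the meaning in this context."
    )

prompt = build_explain_prompt("Hola", "Hola, buenas tardes.",
                              "Spanish", "English")
print(prompt)
```

Note how little scaffolding is involved: the single context sentence plus one question, which keeps the request small and cheap.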

I agree this would be interesting to try, but I’m afraid it’s not my main focus for the time being.

Hi Steve, big fan, are there any updates on the explain feature for the Chrome extension?

1 Like

Steve, does that question apply to the entire text, including the surrounding paragraph? I got the sense that in the second paragraph of the explanation, the AI had access to more than the sentence, because it makes references to text outside the sentence to explain the word.

If I ask a question at the end of the explanation, is the text that the AI has access to the same, just the sentence?

Are there any plans to extend the context to maybe the entire paragraph?

Thanks!

Thanks! There are no updates on the explain feature being added to the Chrome extension I’m afraid. It should be noted that premium users do get context-aware translations in the web reader, which does make it so that a full explanation is rarely needed. But of course an explanation can be handy and would be good to add to the Web Reader. Maybe sometime this year but no promises. In the meantime, one option if you miss the explanations is to use the “Import to Readlang” button to import the text into Readlang to read.

The AI only has access to the context sentence at the moment. The same context sentence that is saved with the word in your Word List.

Sometimes for very famous works the AI seems to be aware of more context because it knows the source material.

I have been tempted to expand the context beyond the sentence in cases where the sentence is very short. I think it’s very rare that the entire paragraph would be needed in order to disambiguate the meaning of a word or phrase which is the goal of the explain feature, so I think the tradeoff in terms of speed and cost probably isn’t worth it to include the entire paragraph.
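The "expand only when the sentence is very short" idea could be sketched like this. The threshold and function are purely illustrative, not anything Readlang actually implements:

```python
# Hypothetical heuristic: send only the context sentence by default,
# and widen to the whole paragraph only when the sentence is too short
# to disambiguate the word. The word-count threshold is an assumption.

MIN_CONTEXT_WORDS = 6

def pick_context(sentence, paragraph, min_words=MIN_CONTEXT_WORDS):
    """Return the context span to include in the explain prompt."""
    if len(sentence.split()) >= min_words:
        return sentence   # usually sufficient, and cheapest
    return paragraph      # very short sentence: fall back to paragraph

# A one-word sentence like "No." carries almost no context on its own.
print(pick_context("No.", "He hesitated. No. He would not go back."))
```

This keeps the common case fast and cheap while still covering the rare short-sentence case, which matches the speed/cost tradeoff described above.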

I would encourage you to think about potentially expanding the window. Maybe add a toggle to only do it when we want to.

The latest LLMs are extremely powerful. I’m impressed by o1. I keep hearing about very smart people being blown away by o1 and o1 pro. And I mean scholars, economists, philosophers, who get better answers from these models than they get from their peers, who are experts in their field. And not just factual answers, but reasoned arguments. These models will keep getting smarter and smarter.

Given that I do a lot of reading on Readlang, I think there should be a way to use LLMs not just to learn words but to engage in conversations about the text. To do that, we have to give the model access to more text: the entire paragraph or even the entire chapter. We don’t need that to understand words, but we do need it to get the most out of the models. Which is why it should be a toggle.

Keep the premium feature but maybe add an API option since the compute cost will start to pile up quickly.

Now, I have to copy the paragraphs and paste them into the chat. I usually do it on my phone; I read on Readlang on a Boox device. I could do this in Readlang. Yes, I know this is a language learning tool. But it’s also a reading tool. And we don’t read just to learn languages. For me it’s the opposite: I acquire languages so I can read. I read all my electronic books with Readlang. Why read with any other apps? And reading involves thinking. And the models are becoming a more and more important tool for that.

I think this is the best example of what I’m talking about. I had an earlier post on this, but I think reading, for some of us, will evolve like this. As the models get better, we will use them more and more to understand the text and references in the text.

I haven’t got my Daylight device yet, so I’m not sure if it will be able to do this with Readlang or if I’ll have to use their own reading app for that. But if there is one thing that would get me to move away from reading with Readlang in the languages I know well enough, it’s not being able to do this. If you build this into Readlang, then I’m a captive audience. Why use anything else?

The thing is, everything is pretty much built. Just allow us to expand the context window. Some people might want to use voice and talk to the model since it’s easier than typing on the device.

You basically already have everything you need to expand the use of Readlang. I keep coming back to this theme. Readlang has the best experience for reading a foreign language at an advanced level, which is a specific, niche use case. But it could be so much more than that. And some use cases are low-hanging fruit; they would be so easy to realize. This is one of them. I would use it for everything: talking about German grammar, about philosophical arguments Girard makes, about events, people, and places mentioned in the Papyrus book I’m reading in Spanish, or exploring themes in Shakespeare and discussing Harold Bloom’s and Girard’s interpretations of Hamlet. It’s all there. And whichever app manages to make this interaction the most seamless and hassle-free will win.

Of course, we’ll also have to get access to the latest models available. Talking with o1 is so much better than talking to 4o.

I got the OpenAI Pro subscription and I’m just blown away by how good it is. The biggest drawbacks are the cost and the time it takes to compute (which will hopefully get faster over time). The cost will probably come down significantly, as we’ve seen with the new DeepSeek R1 model over the last few days.

My reading process now is reading via Readlang on a Boox Note Max e-ink reader, then taking pictures of the text displayed on the reader and importing them into ChatGPT to interact about the text. That’s fairly inefficient. It would be great to access the model via API and have it see what’s on the page. Daylight Computer has implemented this in their OS for their device.

Another option would be to have a check box that would give the model access to the entire text I’ve read to that point, not just what’s on the page. That’s obviously more expensive and probably unnecessary for many books, but if I’m reading a recent book and the model hasn’t been trained on its full text, then that’s extremely useful.

There’s nothing like reading and interacting with the o1 pro model about the text. The last time I experienced this was at university with a good professor. A lot of people are already reading “with” the models, but there’s still a lot of friction and inefficiency. Even if we only had o1 pro and the o3 model coming out in a few weeks, they have the potential to revolutionize the way we read (though they will likely keep getting better). Now I think it’s a matter of improving the UI and reducing friction to make them less painful to use.

You piqued my interest there. What’s it like reading books in your TL with o1? What do you discuss with it?

It’s like having an expert who spent a decade studying a book, the author and the period. It’s like having 10,000 of these experts in the vast majority of fields, at your disposal, at any time of the day, to answer any question you might have.

I ask it anything and everything: what words mean in a context, the history of an idiom, the plot, the characters, the historical context of a book that might help me understand a certain aspect of it, etc. I ask it anything that comes into my head when I read. The biggest challenge is remembering that I can ask it anything, retraining my mind to ask types of questions I didn’t ask before because I had no one to ask.

There have been multiple experts out there testing the limits of the o1 and o1 pro models in their fields, with advanced economic arguments, technical discussions of philosophy, etc., and they’re saying the answers are often just as good as, or even better than, the ones given by the experts.

These are probabilistic models, so they won’t always be right. When I use them for my work, I check every time. But they’re so good, so often, that the advantages significantly outweigh the drawbacks.

I can ask it about the concept of authenticity in Heidegger’s Sein und Zeit, or why his project failed. I can explore the concept of différance in Derrida. I can ask it to give me a Girardian interpretation of Mark Twain’s Adventures of Tom Sawyer. I can take three pictures of a Picasso at the museum and ask it to analyze the common themes. The possibilities are endless. The only limit at this point is our curiosity and imagination. I can even ask it to point out important aspects that I might have missed. How would Ricoeur interpret this text? How would Harold Bloom read it, and who are the authors that influenced Mark Twain? I could go on and on. I can spend hours exploring a text and not get bored, and still learn things.

Try it with a topic you know a lot about. The more you know, the better your questions will be, and the better the answers will get. And that’s where the wow factor comes in. If you ask it simple, generic questions about something you know little about, it’s still good. Someone who hasn’t read any Shakespeare can ask an expert basic questions and will learn a lot. But those questions won’t be reflective of the expert’s knowledge. That knowledge is revealed more and more as one spends time with the text and can ask better questions.

I don’t think people have fully processed how groundbreaking these technologies are. Some have tried GPT-3.5 or 4 and were not impressed. o1 is a different game, and I suspect o3 will be even better. We’re barely scratching the surface. For someone who’s obsessed with learning, it’s a dream.

1 Like

HudZah enjoys reading the old-fashioned way, but he now finds that he gets more out of the experience by reading alongside an AI. He puts PDFs of books into Claude or ChatGPT and then queries the books as he moves through the text. He uses Granola to listen in on meetings so that he can query an AI after the chats as well. His friend built Globe Explorer, which can instantly break down, say, the history of rockets, as if you had a professional researcher at your disposal. And, of course, HudZah has all manner of AI tools for coding and interacting with his computer via voice.