> Instead of googling for the site, I google for the site's Wikipedia article ("scihub wiki"), which usually has an up-to-date link to the site in the sidebar, whereas Google is forced to censor its results.
In the video, it doesn't show this. It shows going to the scihub.idk domain, and then a redirect happens. So does this tool just host a local domain resolver (and HTTP redirect server) for all .idk domains that does a wiki search and then responds with an HTTP redirect?
1. Make a Wikipedia search API request for the .idk domain, using the name as the article name.
2. Retrieve the rendered page contents if found.
3. Find the first Wikipedia infobox table on the page.
4. Extract the first "URL" or "Website" entry in that infobox.
5. Return the entry's value, if it's a link.
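Steps 3–5 could be sketched roughly like this in Python (the tool itself appears to be Rust; the `infobox` class name and the "Website"/"URL" row labels are assumptions about Wikipedia's rendered infobox markup, which varies by article):

```python
from html.parser import HTMLParser


class InfoboxWebsiteParser(HTMLParser):
    """Find the first link in the first "Website"/"URL" row of the
    first table whose class contains "infobox"."""

    def __init__(self):
        super().__init__()
        self.in_infobox = False
        self.in_label = False
        self.label_matched = False
        self.website = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "table" and "infobox" in attrs.get("class", ""):
            self.in_infobox = True
        elif self.in_infobox and tag == "th":
            self.in_label = True
        elif (self.in_infobox and self.label_matched
              and tag == "a" and self.website is None):
            self.website = attrs.get("href")

    def handle_endtag(self, tag):
        if tag == "th":
            self.in_label = False
        elif tag == "table":
            self.in_infobox = False

    def handle_data(self, data):
        # A row whose header cell reads "Website" or "URL" marks
        # the entry we want the next link from.
        if self.in_label and data.strip().lower() in ("website", "url"):
            self.label_matched = True


def extract_official_url(page_html):
    """Return the first Website/URL link found in the infobox, or None."""
    parser = InfoboxWebsiteParser()
    parser.feed(page_html)
    return parser.website
```

A real implementation would feed this the rendered page HTML fetched in step 2; here the point is only that a streaming HTML parser suffices, no full DOM library needed.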
All this runs in a nickel.rs server on 127.0.0.1:80, which answers each request with a permanent redirect to the destination. dnsmasq[1] is configured so that any .idk domain resolves to that local server, which then performs the Wikipedia lookup above.
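The dnsmasq side can be a single rule; `address=/idk/127.0.0.1` is dnsmasq's standard wildcard-domain syntax, though the file path below is only an assumption:

```
# /etc/dnsmasq.d/idk.conf (assumed location)
# Answer every *.idk query with the loopback address, so the
# local nickel.rs redirect server receives the HTTP request.
address=/idk/127.0.0.1
```

Everything else resolves normally, so only .idk lookups are intercepted.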
The extension could also use Wikidata [1] entries – which (AFAIK) almost always hold the data displayed in a Wikipedia article's infobox – because then it wouldn't have to resort to parsing HTML.
Specifically, Wikidata has an "official website" property [2] that seems to be used. If there are multiple values, as in Sci-Hub's case [3], it could pick one based on user preferences.