Two different threads (= two different user queries) can call the request
function in a row and then the response function. The namespace will be the
same since this is the same engine.
To keep exactly the same value, ``base_url`` must be stored in params and then
retrieved using ``resp.search_params["base_url"]``.
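A minimal sketch of the pattern (the helper ``pick_base_url`` and the URL
layout are illustrative)::

  def request(query, params):
      # each query picks its own base_url and remembers it in params
      params['base_url'] = pick_base_url()
      params['url'] = params['base_url'] + '/search?q=' + query
      return params

  def response(resp):
      # resp.search_params is this request's params dict, so concurrent
      # queries each see the base_url they stored above
      base_url = resp.search_params['base_url']
      ...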
Suggested-by: @dalf https://github.com/searxng/searxng/pull/862#discussion_r799324861
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Can replace filtron:
* rate limit the number of requests per IP and per (IP, User-Agent)
* block some bots
Uses Redis; the data stored in Redis never contains IP addresses, only HMACs
computed with the secret_key (see the sketch below).
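A minimal sketch of the idea (function and variable names are illustrative)::

  import hmac
  import hashlib

  def client_key(secret_key: str, ip: str) -> str:
      # the Redis key identifies a client without storing its IP address
      return hmac.new(secret_key.encode(), ip.encode(), hashlib.sha256).hexdigest()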
Co-authored-by: Markus Heiser <markus.heiser@darmarit.de>
Other optional parameters ..
`&sort=crawl_date`
can be appended to search_string to sort results by date.
`&domain=example.org`
can be appended to search_string to get results from just one domain.
Public instances could get timed out (for 3600s) relatively fast.
--
Merged from @allendema's commit [1] and slightly modified / see [2].
Related-to: [1] 455b2b4460
Related-to: [2] https://github.com/searx/searx/pull/3040
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
The implementation of the etools engine is poor: no date-range support, no
language support, and it is broken by a CAPTCHA [1].
etools is a metasearch engine; the major search engines it supports (Google,
Bing, Wikipedia, Yahoo) are already available in SearXNG.
While etools does support several engines we currently don't support directly,
support for them should be added directly to SearXNG if there is demand.
In practice: in SearXNG the poorer etools results will be mixed with good
results from other engines we have (as long as there is no CAPTCHA).
In the best case, what we gain with etools is, e.g., results from de.ask.com for
a German request .. in all other cases, worse results bubble up in SearXNG's
result list.
[1] https://github.com/searxng/searxng/issues/696#issuecomment-1005855499
Closes: https://github.com/searxng/searxng/issues/696
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
* allow metrics recording (response time, etc.) to be disabled
* this commit doesn't change the UI. If the metrics are disabled,
  /stats and /stats/errors will return an empty response;
  in /preferences, the response time and reliability columns will be empty.
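Assuming the option lands in the ``general`` section of settings.yml, disabling
metrics would look like::

  general:
    enable_metrics: false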
The tab icon names are currently hard coded in the templates.
This commit lets us introduce an icon property in the future, e.g.::

  categories_as_tabs:
    general:
      icon: search-outline
These dictionaries are no longer part of the general category,
so they're no longer queried by default -> we can enable them
by default without degrading general query performance.
The general category is the category that is searched by default.
From a privacy standpoint it doesn't make sense to send all general
queries to specialized search engines that cannot deal with those
queries anyway.
Add a redis connector; the default DB connector is a socket at::

  unix:///usr/local/searxng-redis/run/redis.sock?db=0
To set up a redis instance, simply use::

  $ ./manage redis.build
  $ sudo -H ./manage redis.install
A hint for developers:
To get access rights to this instance, your developer account needs to be added
to the *searxng-redis* group::
  $ sudo -H ./manage redis.addgrp "${USER}"
  # don't forget to log out & log in again to become a member of the group
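To quickly check access from Python (a sketch using the redis package)::

  import redis

  client = redis.Redis.from_url('unix:///usr/local/searxng-redis/run/redis.sock?db=0')
  print(client.ping())  # True when the socket and group membership are OK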
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Previously all categories were displayed as search engine tabs.
This commit changes that so that only the categories listed under
categories_as_tabs in settings.yml are displayed.
This lets us introduce more categories without cluttering up the UI.
Categories not displayed as tabs can still be searched with !bangs.
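A sketch of the corresponding settings.yml section (the tab selection shown is
illustrative)::

  categories_as_tabs:
    general:
    images:
    videos:
    news: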
Since digg no longer works, we do not have an active engine in the social media
category. Enable reddit by default to have at least one engine back in this
category.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
* disabled by default
* settings.yml: ui.query_in_title
* in /preferences: privacy tab
When enabled, the result page's title contains the user query.
previously:
* oscar theme: the query was always included
* simple theme: the query was included with the GET method
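The corresponding settings.yml entry would be::

  ui:
    query_in_title: false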
The key of the dictionary 'searx.data.ENGINES_LANGUAGES' is the *engine name*
configured in settings.yml. When multiple engines are configured to use the
same origin engine (e.g. `engine: google`)::
  - name: google
    engine: google
    use_mobile_ui: false
    ...

  - name: google italian
    engine: google
    use_mobile_ui: false
    language: it
    ...

  - name: google mobile ui
    engine: google
    shortcut: gomui
    use_mobile_ui: true
There exists no entry for ENGINES_LANGUAGES[engine.name] (e.g. `name: google
mobile ui` or `name: google italian`). The issue could be solved by recreating
ENGINES_LANGUAGES::

  make data.languages

But this is nothing a SearXNG admin would like to do when just configuring
additional engines, since it merely duplicates entries in ENGINES_LANGUAGES.
BTW: `make data.languages` has various external requirements which might not be
installed or available on a production host.
With this patch, if the lookup by engine.name fails, ENGINES_LANGUAGES[engine.engine]
is used to get engine.supported_languages (e.g. `google` for the engine named
`google mobile ui`).
For an engine, when there is `language: ...` in the YAML settings, the engine
supports only one language; in this case engine.supported_languages should
contain this value defined in settings.yml (e.g. `it` for the engine named
`google italian`).
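A minimal, simplified sketch of the resulting lookup order (not the actual
code)::

  # fall back from the configured name to the origin engine's name
  languages = ENGINES_LANGUAGES.get(engine.name) or ENGINES_LANGUAGES.get(engine.engine)
  if getattr(engine, 'language', None):
      # a `language: ...` setting restricts the engine to one language
      languages = [engine.language]
  engine.supported_languages = languages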
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Closes: https://github.com/searxng/searxng/issues/384
Suggestions should be added too::

  suggestion_xpath: //div[@class="text-gray h6"]/a

You can try it with::

  !brave recurzuoin
Suggested-by: @allendema in https://github.com/searx/searx/issues/2857#issuecomment-904837023
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
This commit removes the need to update the brand for GIT_URL and GIT_BRANCH:
they are read from the git repository.
It is possible to call `python -m searx.version freeze` to freeze the current
version. This is useful when the code is installed outside git (distro package,
docker, etc.).
Denying formats was implemented in 6ed4616d.
To harden SearXNG instances by default, formats other than HTML should be
denied. Most JSON, RSS and CSV requests are bots [1]::

  Bots are the only users of this feature on a public instance, and they abuse
  it so much that the engines rate limit the instance's IP address pretty
  quickly.
[1] https://github.com/searxng/searxng/issues/95
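A hardened instance would then allow only HTML::

  search:
    formats: [html]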
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Do not merge this patch into the master branch of SearXNG! This branch exists
only for testing the feature branch fix-searx.sh @return42.
This patch changes the buildenv to::
  GIT_URL='https://github.com/return42/searxng'
  GIT_BRANCH='fix-searx.sh'
  SEARX_PORT='7777'
  SEARX_BIND_ADDRESS='127.0.0.12'
To test the installation procedure, clone the feature branch (fix-searx.sh)::
  $ cd ~/Downloads
  $ git clone --branch fix-searx.sh https://github.com/return42/searxng searxng
  $ cd searxng
  $ ./utils/searx.sh install all
  ...
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Not all settings from the 'brand:' section of the YAML files are needed in the
shell scripts. This patch reduces the variables in ./utils/brand.env to what is
needed. The following 'brand:' settings can be removed from this file:
- ISSUE_URL
- DOCS_URL
- PUBLIC_INSTANCES
- WIKI_URL
Tasks running outside of an *installed instance* need the following settings
from the YAML configuration:
- GIT_URL <--> brand.git_url
- GIT_BRANCH <--> brand.git_branch
- SEARX_URL <--> server.base_url (aka PUBLIC_URL)
- SEARX_PORT <--> server.port
- SEARX_BIND_ADDRESS <--> server.bind_address
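After this patch, a generated utils/brand.env contains only something like
(values illustrative)::

  export GIT_URL='https://github.com/searxng/searxng'
  export GIT_BRANCH='master'
  export SEARX_URL='https://searx.example.org'
  export SEARX_PORT='8888'
  export SEARX_BIND_ADDRESS='127.0.0.1'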
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
modified docs/admin/engines/settings.rst
- Fix documentation and add section 'brand'.
- Add remarks about **buildenv** variables.
- Add remarks about settings from environment variables $SEARX_DEBUG,
$SEARX_PORT, $SEARX_BIND_ADDRESS and $SEARX_SECRET
modified docs/admin/installation-searx.rst & docs/build-templates/searx.rst
Fix template location /templates/etc/searx/settings.yml
modified docs/dev/makefile.rst
Add a description of the 'make buildenv' target and describe
- how all SearXNG setups are centralized in the settings.yml file
- why some tasks need a utils/brand.env (aka the instance's buildenv)
modified manage
The settings file from the repository's working tree is used by default; the
user is asked if a /etc/searx/settings.yml file exists.
modified searx/settings.yml
Add comments about when it is needed to run 'make buildenv'
modified searx/settings_defaults.py
The default for server:port is taken from the environment variable SEARX_PORT.
modified utils/build_env.py
- Some defaults in settings.yml are taken from the environment,
  e.g. SEARX_BIND_ADDRESS (searx.settings_defaults.SCHEMA). When the
  'brand.env' file is created, these environment variables should be
  unset first.
- The CONTACT_URL environment variable is not needed in utils/brand.env
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Qwant is a fast and reliable search engine, and AFAIK there is no CAPTCHA. Let
us enable the Qwant engines by default.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
The implementation uses the Qwant API (https://api.qwant.com/v3). The API is
undocumented but can be reverse engineered by reading the network log of
https://www.qwant.com/ queries.
This implementation is used by different qwant engines in the settings.yml::
  - name: qwant
    categories: general
    ...

  - name: qwant news
    categories: news
    ...

  - name: qwant images
    categories: images
    ...

  - name: qwant videos
    categories: videos
    ...
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
The engine was added in commit a4b07460, but now it shows new issues [1].
In the '90s of the last century, dogpile had its own web index, but nowadays it
is a meta-search engine [2]:

  Powered by technology, Dogpile returns all the best results from leading
  search engines including Google and Yahoo!

Using dogpile as an engine in SearXNG needs more investigation; an XPath
solution like the one we have is not enough. It is questionable whether it still
makes sense to investigate further into a meta-search engine with a ReCAPTCHA in
front.
With this patch the dogpile engine is removed.
[1] https://github.com/searxng/searxng/issues/202
[2] https://www.dogpile.com/support/aboutus
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
An engine just for podcasts: an API which returns podcasts and their info, like
website, author, etc.
Upstream query example: https://gpodder.net/search.json?q=linux
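Whether it lands as a dedicated module or a json_engine entry, a sketch against
that endpoint could look like this (the response field names are assumptions)::

  - name: gpodder
    engine: json_engine
    shortcut: gpod
    search_url: https://gpodder.net/search.json?q={query}
    url_query: website
    title_query: title
    content_query: description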
Added synonyme.woxikon.de using the xpath engine. Adds a site which returns
word synonyms, though only in German.
Depending on the query, not all synonyms are shown because the XPath selection
is not optimal, but it should do the job just fine.
Upstream example query: https://synonyme.woxikon.de/synonyme/test.php
BTW: add an about section to the YAML configuration.
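A sketch of such an xpath engine entry (the selectors here are illustrative,
not the ones actually committed)::

  - name: synonyme woxikon
    engine: xpath
    shortcut: woxi
    search_url: https://synonyme.woxikon.de/synonyme/{query}.php
    url_xpath: //a[contains(@class, "synonym")]/@href
    title_xpath: //a[contains(@class, "synonym")]
    content_xpath: //div[contains(@class, "synonyms")]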
It now shows descriptions with their correct URLs when there are videos in the
search results, pulling content_xpath from snippet-description instead of
snippet-content.
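In the engine's configuration this amounts to something like (a sketch; the
exact selector may differ)::

  content_xpath: //*[@class="snippet-description"]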
Suggested-by: @eagle-dogtooth https://github.com/searx/searx/issues/2857#issuecomment-869119968
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Compressing saved preferences in the URL was introduced in 5f758b2d3 and
slightly fixed in 8f4401462. But the main failure was not fixed: the decompress
function returns a binary string, and this binary string should first be decoded
to a string before it is passed to urllib.parse.parse_qs.
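A minimal sketch of the fix (``decompress`` stands for the codec helper already
used there)::

  from urllib.parse import parse_qs

  # decompress() returns bytes; decode to str before parsing
  settings = parse_qs(decompress(compressed_preferences).decode())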
BTW: revert the hot-fix from 5973491
Related-to: https://github.com/searxng/searxng/issues/166
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Slightly modified merge of commit [1cb1d3ac] from searx [PR 2543]:
This adds Docker Hub .. as a search engine .. the engine's favicon was
downloaded from the Docker Hub website with wget and converted to a PNG
with ImageMagick .. It supports the parsing of URLs, titles, content,
published dates, and thumbnails of Docker images.
[1cb1d3ac] https://github.com/searx/searx/pull/2543/commits/1cb1d3ac
[PR 2543] https://github.com/searx/searx/pull/2543
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Access to formats can be denied via the settings configuration::

  search:
    formats: [html, csv, json, rss]
Closes: https://github.com/searxng/searxng/issues/95
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
To test & demonstrate this implementation, download:

  https://liste.mediathekview.de/filmliste-v2.db.bz2

and unpack it into searx/data/filmliste-v2.db; in your settings.yml, define a
sqlite engine named "demo"::

  - name: demo
    engine: sqlite
    shortcut: demo
    categories: general
    result_template: default.html
    database: searx/data/filmliste-v2.db
    query_str: >-
      SELECT title || ' (' || time(duration, 'unixepoch') || ')' AS title,
             COALESCE( NULLIF(url_video_hd,''), NULLIF(url_video_sd,''), url_video) AS url,
             description AS content
        FROM film
       WHERE title LIKE :wildcard OR description LIKE :wildcard
       ORDER BY duration DESC
    disabled: false
Query to test: "!demo concert"
This is a rewrite of the implementation from commit [1]
[1] searx/searx@8e90a21
Suggested-by: @virtadpt searx/searx#2808
* [mod] option to enable or disable "proxy" button next to each result
Closes: https://github.com/searxng/searxng/issues/51
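A sketch of the relevant settings (the option name ``proxify_results`` is an
assumption about the final implementation)::

  result_proxy:
    url: http://127.0.0.1:3000/
    key: !!binary "your_morty_proxy_key"
    proxify_results: true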
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Co-authored-by: Alexandre Flament <alex@al-f.net>
Springer Nature is a global publisher dedicated to providing service to the
research community [1], with an official API [2].
To test this PR, first get your API key by following this page:

  https://dev.springernature.com/signup

In searx/engines/springer.py at line 24, add this API key. I left my own key,
commented out in the line above. Feel free to use it if needed.
[1] https://www.springernature.com/
[2] https://dev.springernature.com/
The new sci-hub URLs are coming from @aurora-vasiliev [1].
[1] https://github.com/searx/searx/pull/2706
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
settings.yml:
* outgoing.networks:
  * can contain network definitions
  * properties: enable_http, verify, http2, max_connections, max_keepalive_connections,
    keepalive_expiry, local_addresses, support_ipv4, support_ipv6, proxies, max_redirects, retries
  * retries: 0 by default, the number of times searx retries to send the HTTP request
    (using a different IP & proxy each time)
  * local_addresses can be "192.168.0.1/24" (it supports IPv6)
  * support_ipv4 & support_ipv6: both True by default,
    see https://github.com/searx/searx/pull/1034
* each engine can define a "network" section:
  * either a full network description
  * or a reference to an existing network
* all HTTP requests of an engine use the same HTTP configuration (this was not
  the case before; see the proxy configuration in master)
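A sketch of such a configuration (the network name and values are
illustrative)::

  outgoing:
    networks:
      tor:
        proxies: socks5h://127.0.0.1:9050
        retries: 2

  engines:
    - name: example
      network: tor  # reference the network defined above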
Instead of a hard-coded `oadoi.org` default, use the default value from
`settings.yml`.
Fix an issue in the themes: the replacement 'current_doi_resolver' contains the
doi_resolver_url, not the name of the DOI resolver. Compare the return value
of::

  searx.plugins.oa_doi_rewrite.get_doi_resolver(...)
Fix a typo in `get_doi_resolver(..)`: suggested by @kvch:
*L32 should set doi_resolver not doi_resolvers*
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
The JSON response has changed: it now contains HTML chunks, which is not
compatible with our json engine, so we have to switch to HTML/XPath parsing.
Added a line to the yacy entry to enable HTTP when the local yacy instance
isn't using HTTPS. Otherwise, an error will be thrown in the logs: "No
connection adapters were found for 'http://localhost:8090/yacysearch.json...'".
This is likely related to ticket #2641, which forces HTTPS by default.
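The added line, in context (a sketch of the settings.yml entry)::

  - name: yacy
    engine: yacy
    base_url: http://localhost:8090
    enable_http: true  # the added line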
The old xpath configuration for Google Scholar did not work and is replaced by
a Python implementation.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Some JSON APIs return HTML in either the title or the content.
This commit adds two new parameters to the json_engine:
content_html_to_text and title_html_to_text, False by default.
If True, searx.utils.html_to_text removes the HTML tags.
Update crossref, openairedatasets and openairepublications engines
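For example (a sketch; only the two new parameters matter here)::

  - name: crossref
    engine: json_engine
    content_html_to_text: true
    title_html_to_text: true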
The new version of MetaGer needs to reload the results (into an iframe) with a
unique tag (see the HTML response below).
Implementing a dedicated metager engine for searx makes no sense to me. The
great days of MetaGer seem to be over. I remember the good old days when this
project started, in the '90s of the last century. But in the last few years it
has become more and more crap. As the name suggests, MetaGer was made for
Germans in the first place. They have added English and Spanish translations,
but the i18n is very poor compared to what searx offers.
It's a pity; let's drop MetaGer.
This is the first response; the id (b82679980656899ba5a17ffd02a56846) is unique
for each query::
  $ curl "https://metager.org/meta/meta.ger3?eingabe=foo&submit-query=&focus=web"

  <!DOCTYPE html>
  <html lang="en">
  <head>
      <meta charset="UTF-8">
      <link rel="stylesheet" href="/index.css?id=b82679980656899ba5a17ffd02a56846">
      <script src="/index.js?id=b82679980656899ba5a17ffd02a56846"></script>
      <title>foo - MetaGer</title>
      <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1" />
  </head>
  <body>
      <iframe id="mg-framed" src="https://metager.org/meta/meta.ger3?eingabe=foo&submit-query=&focus=web&mgv=b82679980656899ba5a17ffd02a56846" autofocus="true" onload="this.contentWindow.focus();"></iframe>
  </body>
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
BTW: fix indentation by 2 spaces
The additional tests have been commented out in the google engines so as not to
trigger any CAPTCHA issues.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Check the HTTP response:
* detect some common CAPTCHA challenges (no solving). In this case the engine is
  suspended for a long time.
* otherwise raise HTTPError as before
The check is done in poolrequests.py (it was previously in search.py).
Update qwant, wikipedia and wikidata to use raise_for_httperror instead of
raise_for_status.
recoll is a local search engine based on Xapian:
http://www.lesbonscomptes.com/recoll/
By itself recoll does not offer web or API access; this can be achieved using
recoll-webui:
https://framagit.org/medoc92/recollwebui.git
This engine uses a custom 'files' result template.
Set `base_url` to the location where recoll-webui can be reached.
Set `dl_prefix` to a location where the file hierarchy as indexed by recoll can
be reached.
Set `search_dir` to the part of the indexed file hierarchy to be searched; use
an empty string to search the entire search domain.
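A sketch of a matching settings.yml entry (the URLs are illustrative)::

  - name: recoll
    engine: recoll
    shortcut: recoll
    categories: files
    base_url: http://localhost:8080/
    dl_prefix: http://localhost:8080/files
    search_dir: ''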
The Xpath engine and results template were changed to account for the fact that
archive.org doesn't cache .onions, though some onion engines might have their
own cache.
Disabled by default. Can be enabled by setting the SOCKS proxies to wherever Tor
is listening and setting using_tor_proxy to True.
Requires Tor and updating packages.
To avoid manually adding the timeout on each engine, you can set
extra_proxy_timeout to account for the extra time taken by Tor (or whatever
proxy is used).
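A sketch of the relevant settings (assuming they live under ``outgoing:``;
values illustrative)::

  outgoing:
    proxies:
      all://:
        - socks5h://127.0.0.1:9050
    using_tor_proxy: true
    extra_proxy_timeout: 10.0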
A new "base" engine called command is introduced. It is the foundation for all command line engines for now.
You can use this engine to create your own command line engine.
Add some engines (commented out to make sure no one enables anything accidentally):
* git grep: This engine lets you grep in the searx repo.
* locate: If locate is installed and initialized, you can search on the FS.
* find: You can find files with a specific name from where you started searx.
* pattern search in files: This engine utilizes the command fgrep.
* regex search in files: This engine runs `grep` to find a file based on its contents.
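A sketch of such a (commented-out) entry, e.g. for git grep (the delimiter
details are illustrative)::

  - name: git grep
    engine: command
    command: ['git', 'grep', '{{QUERY}}']
    shortcut: gg
    tokens: []
    disabled: true
    delimiter:
      chars: ':'
      keys: ['filepath', 'code']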
Sending queries through POST, while better for privacy, breaks functionality
with certain extensions (e.g. Firefox containers). Since Firefox does
not send cookies when requesting `/opensearch.xml`, users cannot easily
switch to GET on the client side unless they make a custom search
engine. This commit allows admins to modify the default method on their
side so they can set it to GET if needed.
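The corresponding setting, a sketch (the default remains POST)::

  server:
    method: "GET"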
- enable HTTPS for sci-hub.tw by default
- make sci-hub the default DOI resolver, as it has the largest collection of
  scientific articles
- replace doai.io with dissem.in, as it redirects to this new domain
Co-authored-by: Aurora of Earth <auroraofearth@ya.ru>