On the preferences page, in the 'about' toolbox of an engine, add a link to the
engine's stats page if the engine had one or more errors.
Condition is::
reliabilities[<engine.name>].errors
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Inline styles are blocked by default with Content Security Policy (CSP). Move
the inline styles from 'new_issue.html' to::
searx/static/themes/__common__/less/new_issue.less
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
The file searx/static/themes/simple/less/stats.less is not used (imported) in any
other less file. I can't say when its usage was dropped or whether it has ever
been used; at the moment this file is without any usage.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
* searx.network.client.LOOP is initialized in a thread
* searx.network.__init__ imports LOOP, which may happen
  before the thread has initialized LOOP

This commit adds a new function "searx.network.client.get_loop()"
to fix this issue.
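A minimal sketch of the idea (the actual implementation in
searx.network.client may differ)::

    import asyncio
    import threading

    LOOP = None
    _loop_ready = threading.Event()

    def _run():
        global LOOP
        LOOP = asyncio.new_event_loop()
        _loop_ready.set()
        LOOP.run_forever()

    threading.Thread(target=_run, daemon=True).start()

    def get_loop():
        # block until the networking thread has initialized LOOP
        _loop_ready.wait()
        return LOOP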
The issue exists only in the debug log::
--- Logging error ---
Traceback (most recent call last):
File "/usr/lib/python3.9/logging/__init__.py", line 1086, in emit
stream.write(msg + self.terminator)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 79-89: ordinal not in range(128)
Call stack:
File "/usr/local/searx/searx-pyenv/lib/python3.9/site-packages/flask/app.py", line 2464, in __call__
return self.wsgi_app(environ, start_response)
File "/usr/local/searx/searx-src/searx/webapp.py", line 1316, in __call__
return self.app(environ, start_response)
File "/usr/local/searx/searx-pyenv/lib/python3.9/site-packages/werkzeug/middleware/proxy_fix.py", line 169, in __call__
return self.app(environ, start_response)
File "/usr/local/searx/searx-pyenv/lib/python3.9/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/searx/searx-pyenv/lib/python3.9/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/searx/searx-pyenv/lib/python3.9/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/searx/searx-src/searx/webapp.py", line 766, in search
number_of_results=format_decimal(number_of_results),
File "/usr/local/searx/searx-pyenv/lib/python3.9/site-packages/flask_babel/__init__.py", line 458, in format_decimal
locale = get_locale()
File "/usr/local/searx/searx-pyenv/lib/python3.9/site-packages/flask_babel/__init__.py", line 226, in get_locale
rv = babel.locale_selector_func()
File "/usr/local/searx/searx-src/searx/webapp.py", line 249, in get_locale
logger.debug("%s uses locale `%s` from %s", request.url, locale, locale_source)
Unable to print the message and arguments - possible formatting error.
Use the traceback above to help find the error.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
- add to the list of pylint scripts
- add debug log messages
- move the API key into `settings.yml`
- improve readability
- add some metadata to the results
Signed-off-by: Markus Heiser <markus@darmarit.de>
Springer Nature is a global publisher dedicated to providing services to the
research community [1], with an official API [2].
To test this PR, first get your API key following this page:
https://dev.springernature.com/signup
In searx/engines/springer.py at line 24, add this API key. I left my own key,
commented out in the line above. Feel free to use it, if needed.
[1] https://www.springernature.com/
[2] https://dev.springernature.com/
The new sci-hub URLs are coming from @aurora-vasiliev [1].
[1] https://github.com/searx/searx/pull/2706
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
In the EU there exists the "General Data Protection Regulation" [1], aka GDPR
(BTW: very user friendly!), which requires consent to tracking. To get this
consent from the user, youtube requests are redirected to
https://consent.youtube.com to confirm and to obtain a CONSENT cookie.

This patch adds a CONSENT cookie to the youtube request to avoid the redirection.
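A minimal sketch of the idea, assuming the params dict that searx engines fill
in their request() function; the concrete cookie value is an assumption::

    # set the consent cookie up front so youtube does not redirect
    # to consent.youtube.com (cookie value is an assumption)
    params['cookies']['CONSENT'] = 'YES+'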
[1] https://en.wikipedia.org/wiki/General_Data_Protection_Regulation
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Reported-by: https://github.com/searx/searx/issues/2774
* display the median time instead of the average.
* add a "Reliability" column (sum up the metrics and the checker results).
* the "selected language", "SafeSearch", "Time range" values are displayed as "broken" when the checker tests fail.
Some errors won't stop the engine:
* additional HTTP redirects for example
* some invalid results
secondary=True allows flagging these errors as not important.
Report suspended engines to the user.

searx.search.processor.abstract:
* manages the suspend time (per network).
* reports the suspended time to the ResultContainer (method extend_container_if_suspended)
* adds the results to the ResultContainer (method extend_container)
* handles exceptions (method handle_exception)
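A minimal sketch of what extend_container_if_suspended could look like (the
body below is an assumption; only the method name comes from this commit)::

    from time import time

    def extend_container_if_suspended(self, result_container):
        if self.suspended_until and self.suspended_until > time():
            # report the suspension to the user instead of staying silent
            result_container.add_unresponsive_engine(self.engine_name, 'suspended')
            return True
        return False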
Some engines set result.img_src, others return a result.thumbnail. If
result.img_src is unset and a result.thumbnail is given, show it in the UI.
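A minimal sketch of the fallback, written in Python for brevity (in searx the
actual decision happens in the result templates)::

    # prefer img_src, fall back to thumbnail when img_src is unset
    img_src = result.get('img_src') or result.get('thumbnail')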
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
I also found some items missing a thumbnail, and I used extract_text for
content and title to remove unneeded whitespace.
BTW: added bandcamp's favicon
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
settings.yml:

* outgoing.networks:
  * can contain network definitions
  * properties: enable_http, verify, http2, max_connections, max_keepalive_connections,
    keepalive_expiry, local_addresses, support_ipv4, support_ipv6, proxies, max_redirects, retries
  * retries: 0 by default, the number of times searx retries to send the HTTP request (using a different IP & proxy each time)
  * local_addresses can be "192.168.0.1/24" (it supports IPv6)
  * support_ipv4 & support_ipv6: both True by default
    see https://github.com/searx/searx/pull/1034
* each engine can define a "network" section:
  * either a full network description
  * or a reference to an existing network
* all HTTP requests of an engine use the same HTTP configuration (this was not the case before, see the proxy configuration in master)
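A hypothetical settings.yml sketch of the two variants (names and values below
are assumptions, not taken from this commit)::

    outgoing:
      networks:
        my_proxy:                   # a named network definition
          proxies: http://localhost:8080
          retries: 1
          support_ipv6: false
    engines:
      - name: engine_a
        network: my_proxy           # reference an existing network
      - name: engine_b
        network:                    # or a full network description
          enable_http: true
          max_redirects: 5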
This patch is an addition to PR #2656, which removed all usage of `base_url`
from the templates, except for one forgotten in the cookie URL of the
preferences.

closes: #2740
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
I can't set `default_doi_resolver` in `settings.yml` if I'm using
`use_default_settings`. Searx seems to try to interpret all settings at the
root level of `settings.yml` as dicts, which is correct except for
`default_doi_resolver`, which is at the root level and a string::
File "/usr/lib/python3.9/site-packages/searx/settings_loader.py", line 125, in load_settings
update_settings(default_settings, user_settings)
File "/usr/lib/python3.9/site-packages/searx/settings_loader.py", line 61, in update_settings
update_dict(default_settings[k], v)
File "/usr/lib/python3.9/site-packages/searx/settings_loader.py", line 48, in update_dict
for k, v in user_dict.items():
AttributeError: 'str' object has no attribute 'items'
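A minimal sketch of a possible fix in searx/settings_loader.py (an assumption,
not necessarily the patch applied here): only recurse into values that are
dicts and copy scalars such as strings directly::

    def update_settings(default_settings, user_settings):
        for k, v in user_settings.items():
            if isinstance(v, dict) and isinstance(default_settings.get(k), dict):
                # merge nested dicts recursively
                update_settings(default_settings[k], v)
            else:
                # scalars like the default_doi_resolver string are copied as-is
                default_settings[k] = v
        return default_settings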
Signed-off-by: Markus Heiser <markus@darmarit.de>
Suggested-by: @0xhtml https://github.com/searx/searx/issues/2722#issuecomment-813391659
The `url_for` function in the template context is not the one from Flask; it
is the one from `webapp`. `webapp.url_for_theme` is different from its Flask
namesake and has its quirks when called with the argument `_external=True`.
`webapp.url_for_theme` can't handle absolute URLs since it strips a leading
'/'; here is the snippet of the old code::
url = url_for(endpoint, **values)
if settings['server']['base_url']:
    if url.startswith('/'):
        url = url[1:]
    url = urljoin(settings['server']['base_url'], url)
Another drawback of (Flask's) `_external=True` is that it will not return the
right HTTP scheme when searx (the Flask app) listens on http and is proxied by
an https server.

To get the right scheme, `HTTP_X_SCHEME` is needed by Flask (werkzeug). Since
this is not provided in every environment (e.g. behind Apache mod_wsgi, or
when the HTTP header is not fully set for some other reason), it is
recommended to get *script_name*, *server* and *scheme* from the configured
`base_url`. If `base_url` is specified, then these values are given preference
over Flask's generics.
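A minimal sketch of deriving these values from `base_url` (variable names are
illustrative)::

    from urllib.parse import urlparse

    base_url = 'https://example.org/searx/'   # hypothetical configured value
    parts = urlparse(base_url)
    scheme, server, script_name = parts.scheme, parts.netloc, parts.path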
BTW, this patch normalizes the use of `url_for` in the `opensearch.xml` and
drops the need for `host` and `urljoin` in the template context.
Signed-off-by: Markus Heiser <markus@darmarit.de>
Instead of a hard-coded `oadoi.org` default, use the default value from
`settings.yml`.
Fix an issue in the themes: the replacement 'current_doi_resolver' contains
the doi_resolver_url, not the name of the DOI resolver. Compare the return
value of::
searx.plugins.oa_doi_rewrite.get_doi_resolver(...)
Fix a typo in `get_doi_resolver(..)`, suggested by @kvch:
*L32 should set doi_resolver, not doi_resolvers*
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
fr.wikipedia.org (and, it seems, no other wikipedia site) adds HTML to
api_result['displayTitle'] (search for '!wp :fr Braid', for example).
This commit uses api_result['title'] instead.
The JSON response has been changed and now contains HTML chunks, which is not
compatible with our json engine, so we have to switch to HTML/XPath parsing.
The get_client_id() function:
* fetches https://soundcloud.com
* then fetches each referenced javascript URL to get the client id.

This commit fetches the javascript URLs in reverse order: the client id is in
the last javascript URL.
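A minimal sketch of the approach (the use of `requests` and the regexes are
assumptions; the actual engine code differs)::

    import re
    import requests

    def get_client_id():
        resp = requests.get('https://soundcloud.com')
        # collect the referenced javascript URLs
        js_urls = re.findall(r'src="(https://[^"]+\.js)"', resp.text)
        # the client id sits in the last javascript file, so walking the
        # list in reverse order saves requests
        for url in reversed(js_urls):
            match = re.search(r'client_id:"([^"]+)"', requests.get(url).text)
            if match:
                return match.group(1)
        return None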
Added a line to the yacy entry to enable HTTP if the local yacy instance isn't using HTTPS. Otherwise, an error will be thrown in the logs: "No connection adapters were found for 'http://localhost:8090/yacysearch.json...'". This is likely related to ticket #2641, which forces HTTPS by default.
See https://github.com/requirejs/requirejs/issues/1816
requirejs loads only one file: leaflet.

This commit:
* removes requirejs
* loads leaflet using a <script src=...> HTML tag in searx/templates/oscar/base.html
Many things have changed since the last review of this engine. This patch
fixes the XPath selectors, implements suggestions, and is a complete
review/rewrite of the engine.
Signed-off-by: Markus Heiser <markus@darmarit.de>
When initializing engines, a "SearxEngineResponseException" is logged very
verbosely, including full traceback information::
ERROR:searx.engines:yggtorrent engine: Fail to initialize
Traceback (most recent call last):
File "share/searx/searx/engines/__init__.py", line 293, in engine_init
init_fn(get_engine_from_settings(engine_name))
File "share/searx/searx/engines/yggtorrent.py", line 42, in init
resp = http_get(url, allow_redirects=False)
File "share/searx/searx/poolrequests.py", line 197, in get
return request('get', url, **kwargs)
File "share/searx/searx/poolrequests.py", line 190, in request
raise_for_httperror(response)
File "share/searx/searx/raise_for_httperror.py", line 60, in raise_for_httperror
raise_for_captcha(resp)
File "share/searx/searx/raise_for_httperror.py", line 43, in raise_for_captcha
raise_for_cloudflare_captcha(resp)
File "share/searx/searx/raise_for_httperror.py", line 30, in raise_for_cloudflare_captcha
raise SearxEngineCaptchaException(message='Cloudflare CAPTCHA', suspended_time=3600 * 24 * 15)
searx.exceptions.SearxEngineCaptchaException: Cloudflare CAPTCHA, suspended_time=1296000
For a SearxEngineResponseException this is not needed; those types of
exceptions can be a normal use case, e.g. for CAPTCHA errors like the one
shown in the example above. It should be enough to log a warning for such
issues::
WARNING:searx.engines:yggtorrent engine: Fail to initialize // Cloudflare CAPTCHA, suspended_time=1296000
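A minimal sketch of the assumed shape of the fix in searx/engines/__init__.py::

    try:
        init_fn(get_engine_from_settings(engine_name))
    except SearxEngineResponseException as exc:
        # expected failures such as a CAPTCHA only warrant a warning
        logger.warning('%s engine: Fail to initialize // %s', engine_name, exc)
    except Exception:  # pylint: disable=broad-except
        logger.exception('%s engine: Fail to initialize', engine_name)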
closes: #2612
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
The old XPath configuration for google scholar did not work and has been
replaced by a python implementation.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
- unittest2 is a backport of the new features added to the unittest testing
framework in Python 2.7
- unittest2 was only needed in py2 and can be dropped now
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Bing has a list of regions that it supports and some of these regions
may have more than one possible language.
In some cases, like Switzerland, these languages are always shown as
options, so there is no issue. But in other cases, like Andorra, Bing
will only show one language at a time, either the region's default or
the request's language if the latter is supported by that region.
For example, if the HTTP request is in French, Andorra will appear as
fr-AD but if the same page is requested in any other language Andorra
will appear as ca-AD.
This is especially a problem when Bing assumes that the request is in
English, because it overrides enough language codes to make several major
languages like Arabic disappear from the languages.py file.
To avoid that issue, I set the Accept-Language header to a language
that's only supported in one region to hopefully avoid these overrides.
Use a SPARQL request on wikidata to get the list of currencies.
currencies.json contains the translations for all supported searx languages.

Supersedes #993.
At the moment, videos without a description are not shown; setting the default
content to "" fixes this.

Another current bug is that thumbnails are not displayed. This is caused by a
double slash in the URL. To fix it, every trailing slash is now stripped (for
backwards compatibility) and the API response is parsed correctly.
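A minimal sketch of both fixes (names are illustrative)::

    # strip the trailing slash so joining paths does not produce '//'
    base_url = base_url.rstrip('/')
    # fall back to an empty description so the video is not dropped
    content = api_result.get('description') or ''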
* searx understands "!ddg !g time" as: send "!g time" to DDG
* !g is a DDG bang for Google: DDG returns an HTTP redirect to Google

This commit adds an allows_redirect param not to follow HTTP redirects.
The DDG engine returns an empty result as before, without the HTTP redirect.
On some queries (like an IT error message), wikipedia returns an HTTP error 400.
This commit returns an empty result instead of showing an error to the user.
Some JSON APIs return HTML in either the title or the content.

This commit adds two new parameters to the json_engine:
content_html_to_text and title_html_to_text, False by default.
If True, searx.utils.html_to_text removes the HTML tags.
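A hypothetical settings.yml sketch using the new parameters (engine name and
URL are made up)::

    - name: example json engine
      engine: json_engine
      search_url: https://example.org/api?q={query}
      title_html_to_text: true
      content_html_to_text: true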
Update crossref, openairedatasets and openairepublications engines
The duckduckgo engine requires an additional request after the results have
been sent. This commit makes sure that the second request uses the same
HTTPAdapter, i.e. the same IP address and the same proxy.
The new version of MetaGer needs to reload the results (into an iframe) with a
unique tag (see the HTML response below).

Implementing a dedicated metager engine for searx makes no sense to me. The
great days of MetaGer seem to be over. I remember the good old days when this
project started in the 90's of the last century, but in the last few years it
has become more and more crap. As the name suggests, MetaGer was made for
Germans in the first place. An English and a Spanish translation have been
added, but the i18n is very poor compared to what searx offers.

It's a pity; let's drop MetaGer.
This is the first response; the id (b82679980656899ba5a17ffd02a56846) is
unique for each query::
$ curl "https://metager.org/meta/meta.ger3?eingabe=foo&submit-query=&focus=web"
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<link rel="stylesheet" href="/index.css?id=b82679980656899ba5a17ffd02a56846">
<script src="/index.js?id=b82679980656899ba5a17ffd02a56846"></script>
<title>foo - MetaGer</title>
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1" />
</head>
<body>
<iframe id="mg-framed" src="https://metager.org/meta/meta.ger3?eingabe=foo&submit-query=&focus=web&mgv=b82679980656899ba5a17ffd02a56846" autofocus="true" onload="this.contentWindow.focus();"></iframe>
</body>
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Some of our interface locales include uppercase country codes,
which are separated by `_` instead of the more common `-`.
Also, a browser's `Accept-Language` header could be in lowercase.
This commit attempts to normalize those cases so a browser's
language+country codes can better match with our locales.
This solution assumes that our UI locales have nothing more than
language and optionally country. If we ever add a script specific
locale like `zh-Hant-TW`, this would have to change to accommodate
that, but the idea would be pretty much the same as this fix.
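A minimal sketch of the normalization idea (the function name and exact rules
are assumptions)::

    def normalize_locale(code):
        # accept 'pt_BR', 'pt-br', 'PT-BR', ... and return 'pt-BR'
        parts = code.replace('_', '-').split('-')
        parts[0] = parts[0].lower()
        if len(parts) > 1:
            parts[1] = parts[1].upper()
        return '-'.join(parts)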
The language_support variable is set to True by default,
and set to False in only 5 engines.
Except for the documentation and the /config URL, this variable is not used.

This commit removes the variable definition from the engines and sets the
value according to the supported_languages length: False when the length is 0,
True otherwise.
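A minimal sketch of the rule (the attribute access is an assumption)::

    # language_support is derived instead of being declared per engine
    engine.language_support = len(getattr(engine, 'supported_languages', [])) > 0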
Close #2485
Avoid SearxEngineXPathException errors when parsing invalid results::
.//div[@class="yuRUbf"]//a/@href index 0 not found
Traceback (most recent call last):
File "./searx/engines/google.py", line 274, in response
url = eval_xpath_getindex(result, href_xpath, 0)
File "./searx/searx/utils.py", line 608, in eval_xpath_getindex
raise SearxEngineXPathException(xpath_spec, 'index ' + str(index) + ' not found')
searx.exceptions.SearxEngineXPathException: .//div[@class="yuRUbf"]//a/@href index 0 not found
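A minimal sketch of the fix, assuming eval_xpath_getindex accepts a default
value::

    # pass a default so a missing node no longer raises, and skip the result
    url = eval_xpath_getindex(result, href_xpath, 0, default=None)
    if url is None:
        continue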
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
BTW: fix the indentation by 2 spaces.

The additional tests have been commented out in the google engines so as not
to trigger any CAPTCHA issues.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
BTW: make the engines ready for search.checker:
- replace eval_xpath by eval_xpath_getindex and eval_xpath_list
- google_images: remove outer try/except block
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
The 'video.html' template from the 'oscar' design supports replacements
for *author* and *length*. Google-videos does not have an author; the
publisher info is used for *author* instead.
Hint: these replacements are not supported by the 'simple' design.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
This revision is based on the methods developed in the revision of the google
engine (see commit 410c2f9).
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
This revision is based on the methods developed in the revision of the google
engine (see commit 410c2f9).
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
the query "time" is convinient because most of the search engine will return some results,
but some engines in the general category will return documentation about the HTML tags <time> or <input type="time">
Removes the module searx/brand.py and creates a namespace at searx.brand.

This patch is a first 'proof of concept'. Later we can decide whether to
remove the brand namespace entirely or not.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Without this commit, the module searx checks the secret_key value.
With this commit, `make docs`, utils/standalone_searx.py and
utils/fetch_firefox_version.py work without SEARX_DEBUG=1.
For reference see https://github.com/searx/searx/pull/2386
from_bang is True when the user query contains a bang.
In this case the category is also set to 'none'.

The only usage of from_bang was in searx.webadapter.parse_specific:
if from_bang is True, the EngineRef category is ignored and forced to 'none'.
This commit also removes the searx.webadapter.parse_specific function.
See searx.search.processors.abstract.EngineProcessor.

First, searx calls the get_params method.
If the return value is not None, then searx calls the search method.
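A minimal sketch of the calling convention (argument names are assumptions)::

    processor = processors[engine_name]   # an EngineProcessor instance
    params = processor.get_params(search_query, engine_category)
    if params is not None:
        # only engines that returned params are actually searched
        processor.search(search_query.query, params, result_container,
                         start_time, timeout_limit)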
Check the HTTP response:
* detect some common CAPTCHA challenges (no solving). In this case the engine is suspended for a long time.
* otherwise raise HTTPError as before

The check is done in poolrequests.py (it was previously in search.py).
Update qwant, wikipedia and wikidata to use raise_for_httperror instead of raise_for_status.
According to
820b468bfe/searx/engines/__init__.py (L87-L88)
an engine can have no category at all.

Without this commit, searx raises an exception in searx/results.py.
Note: in this case, the engine is not shown in the preferences.
Before commit 58d72f2, the category was not set in xpath.py, so
searx/engines/__init__.py was setting the category to ['general'].
Commit 58d72f2 set the category to [], which is not replaced by
searx/engines/__init__.py. Consequence: the mojeek engine is hidden in the
preferences. This commit reverts the xpath.py change.

close #2368
Add a new parameter "raise_for_status", set to True by default.
When True, any HTTP status code >= 300 raises an exception (#2332).
When False, the engine can manage the HTTP status code by itself.
- strip html tags and superfluous quotation marks from the content
- remove an unneeded cookie from the request
- remove superfluous imports
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Error pattern::
Engines cannot retrieve results:
digg (unexpected crash time data '2020-10-16T14:09:55Z' does not match format '%Y-%m-%d %H:%M:%S')
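A minimal sketch of the assumed fix: parse the timestamp with the format the
API actually returns::

    from datetime import datetime

    # the API returns e.g. '2020-10-16T14:09:55Z', not '%Y-%m-%d %H:%M:%S'
    published = datetime.strptime('2020-10-16T14:09:55Z', '%Y-%m-%dT%H:%M:%SZ')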
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
recoll is a local search engine based on Xapian:
http://www.lesbonscomptes.com/recoll/
By itself recoll does not offer web or API access;
this can be achieved using recoll-webui:
https://framagit.org/medoc92/recollwebui.git

This engine uses a custom 'files' result template.

Set `base_url` to the location where recoll-webui can be reached.
Set `dl_prefix` to a location where the file hierarchy as indexed by recoll can be reached.
Set `search_dir` to the part of the indexed file hierarchy to be searched; use an empty string to search the entire search domain.
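A hypothetical settings.yml sketch for this engine (all values are
assumptions)::

    - name: recoll
      engine: recoll
      base_url: http://localhost:8080
      dl_prefix: http://localhost:8080/download
      search_dir: ''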
This change is backward compatible with existing configurations.

If a settings.yml is loaded from a user-defined location (SEARX_SETTINGS_PATH
or /etc/searx/settings.yml), then these settings can rely on the default
settings.yml with this option::

use_default_settings: True
deviantart's request and response forms have been changed.
- fixed title
- fixed time_range_dict to 'popular-*-***'
- use the image from <noscript> if it exists
- drop the obsolete "http to https, remove domain sharding"
- use query URL https://www.deviantart.com/search/deviations?page=5&q=foo
- add searx/engines/deviantart.py to pylint check (test.pylint)
Error pattern::
DEBUG:searx:result: invalid title: {'url': 'https://www.deviantart.com/ ...
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Use::

    from searx.engines.duckduckgo import _fetch_supported_languages, supported_languages_url  # NOQA

so it is possible to easily remove all unused imports using autoflake::

    autoflake --in-place --recursive --remove-all-unused-imports searx tests