It prepares the new architecture change:
everything about multithreading is moved into the searx.search.* packages.
Previously the call to the "init" function of the engines was done in searx.engines:
* the network was not set (requests were not sent using the defined proxy)
* it required monkey patching the code to avoid HTTP requests during the tests
* [mod] option to enable or disable "proxy" button next to each result
Closes: https://github.com/searxng/searxng/issues/51
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Co-authored-by: Alexandre Flament <alex@al-f.net>
When there is at least one error or one failed checker test:
* the warning icon is displayed in the reliability column
* the link "View error logs and submit a bug report" is displayed on engine name tooltip.
Before:
* the warning icon was displayed only when one or more checker test(s) failed.
* the link "View error logs and submit a bug report" was not shown when a checker test failed but there were no errors.
In the preference page, in the 'about' toolbox of an engine, add a link to the
stats page of the engine, if the engine had one or more errors.
Condition is::
reliabilities[<engine.name>].errors
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Inline styles are blocked by default with Content Security Policy (CSP). Move
the inline styles from 'new_issue.html' to::
searx/static/themes/__common__/less/new_issue.less
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
File searx/static/themes/simple/less/stats.less is not used (imported) in any
other less file. I can't say when its usage was dropped or if it has ever been
used. ATM this file is without any usage.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
* searx.network.client.LOOP is initialized in a thread
* searx.network.__init__ imports LOOP which may happen
before the thread has initialized LOOP
This commit adds a new function "searx.network.client.get_loop()"
to fix this issue
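A minimal sketch of the idea (illustrative only, not the actual searx.network code)::

    import asyncio, threading

    LOOP = None
    _loop_ready = threading.Event()

    def _run():
        global LOOP
        LOOP = asyncio.new_event_loop()
        _loop_ready.set()
        LOOP.run_forever()

    threading.Thread(target=_run, daemon=True).start()

    def get_loop():
        # block until the background thread has created the loop, then return it
        _loop_ready.wait()
        return LOOP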
The issue exists only in the debug log::
--- Logging error ---
Traceback (most recent call last):
File "/usr/lib/python3.9/logging/__init__.py", line 1086, in emit
stream.write(msg + self.terminator)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 79-89: ordinal not in range(128)
Call stack:
File "/usr/local/searx/searx-pyenv/lib/python3.9/site-packages/flask/app.py", line 2464, in __call__
return self.wsgi_app(environ, start_response)
File "/usr/local/searx/searx-src/searx/webapp.py", line 1316, in __call__
return self.app(environ, start_response)
File "/usr/local/searx/searx-pyenv/lib/python3.9/site-packages/werkzeug/middleware/proxy_fix.py", line 169, in __call__
return self.app(environ, start_response)
File "/usr/local/searx/searx-pyenv/lib/python3.9/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/searx/searx-pyenv/lib/python3.9/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/searx/searx-pyenv/lib/python3.9/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/searx/searx-src/searx/webapp.py", line 766, in search
number_of_results=format_decimal(number_of_results),
File "/usr/local/searx/searx-pyenv/lib/python3.9/site-packages/flask_babel/__init__.py", line 458, in format_decimal
locale = get_locale()
File "/usr/local/searx/searx-pyenv/lib/python3.9/site-packages/flask_babel/__init__.py", line 226, in get_locale
rv = babel.locale_selector_func()
File "/usr/local/searx/searx-src/searx/webapp.py", line 249, in get_locale
logger.debug("%s uses locale `%s` from %s", request.url, locale, locale_source)
Unable to print the message and arguments - possible formatting error.
Use the traceback above to help find the error.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
- add to list of pylint scripts
- add debug log messages
- move API key into `settings.yml`
- improved readability
- add some metadata to results
Signed-off-by: Markus Heiser <markus@darmarit.de>
Springer Nature is a global publisher dedicated to providing service to the research
community [1], with an official API [2].
To test this PR, first get your API key following this page:
https://dev.springernature.com/signup
In searx/engines/springer.py at line 24, add this API key. I left my own key,
commented out in the line above. Feel free to use it, if needed.
[1] https://www.springernature.com/
[2] https://dev.springernature.com/
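For reference, a hedged sketch of a request against the Springer Nature metadata API; the endpoint and parameter names are my assumptions from the public API documentation, not necessarily what springer.py does::

    import requests

    API_KEY = 'YOUR-API-KEY'  # obtained from https://dev.springernature.com/signup

    resp = requests.get(
        'https://api.springernature.com/metadata/json',          # assumed endpoint
        params={'q': 'machine learning', 'api_key': API_KEY},    # assumed parameter names
    )
    for record in resp.json().get('records', []):
        print(record.get('title'))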
The new sci-hub URLs are coming from @aurora-vasiliev [1].
[1] https://github.com/searx/searx/pull/2706
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
In the EU there exists a "General Data Protection Regulation" [1] aka GDPR (BTW:
very user friendly!) which requires consent to tracking. To get the consent
from the user, youtube requests are redirected to confirm and get a CONSENT
Cookie from https://consent.youtube.com
This patch adds a CONSENT Cookie to the youtube request to avoid redirection.
[1] https://en.wikipedia.org/wiki/General_Data_Protection_Regulation
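A minimal sketch of the idea inside the engine's request() function (the cookie value below is a placeholder, not necessarily the exact value this patch sets)::

    from urllib.parse import quote_plus

    search_url = 'https://www.youtube.com/results?search_query={query}'

    def request(query, params):
        # pre-set the consent cookie so consent.youtube.com does not redirect
        # the request (placeholder value, the patch may use a longer one)
        params['cookies']['CONSENT'] = 'YES+'
        params['url'] = search_url.format(query=quote_plus(query))
        return params

    print(request('searx', {'cookies': {}}))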
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Reported-by: https://github.com/searx/searx/issues/2774
* display the median time instead of the average.
* add a "Reliability" column (sum up the metrics and the checker results).
* the "selected language", "SafeSearch", "Time range" values are displayed as "broken" when the checker tests fail.
Some errors won't stop the engine:
* additional HTTP redirects for example
* some invalid results
secondary=True allows flagging these errors as not important.
Report suspended engines to the user.
searx.search.processor.abstract:
* manages suspend time (per network).
* reports suspended time to the ResultContainer (method extend_container_if_suspended)
* adds the results to the ResultContainer (method extend_container)
* handles exceptions (method handle_exception)
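Roughly, how these methods might fit together (a toy sketch, not the actual implementation)::

    # Illustrative only: a toy showing how the methods named above could be used
    # around one engine search (not searx's real abstract processor).
    class ToyProcessor:
        suspended = False

        def extend_container_if_suspended(self, container):
            if self.suspended:
                container.append('engine suspended')   # report the suspension to the user
            return self.suspended

        def extend_container(self, container, results):
            container.extend(results)

        def handle_exception(self, container, exc):
            container.append('error: %s' % exc)        # may also suspend the network

        def search(self, query):
            return ['result for ' + query]

        def run(self, query, container):
            if self.extend_container_if_suspended(container):
                return
            try:
                self.extend_container(container, self.search(query))
            except Exception as exc:
                self.handle_exception(container, exc)

    container = []
    ToyProcessor().run('time', container)
    print(container)                                    # ['result for time']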
Some engines set result.img_src, others return a result.thumbnail. If
result.img_src is unset and a result.thumbnail is given, show it in the UI.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
I also found some items missing a thumbnail, and I used text_extract for content
and title to remove unneeded whitespace.
BTW: added bandcamp's favicon
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
settings.yml:
* outgoing.networks:
* can contain network definitions
* properties: enable_http, verify, http2, max_connections, max_keepalive_connections,
keepalive_expiry, local_addresses, support_ipv4, support_ipv6, proxies, max_redirects, retries
* retries: 0 by default, the number of times searx retries to send the HTTP request (using a different IP & proxy each time)
* local_addresses can be "192.168.0.1/24" (it supports IPv6)
* support_ipv4 & support_ipv6: both True by default
see https://github.com/searx/searx/pull/1034
* each engine can define a "network" section:
* either a full network description
* either reference an existing network
* all HTTP requests of engine use the same HTTP configuration (it was not the case before, see proxy configuration in master)
This patch is an addition to PR #2656 which removed all usage of `base_url` from
the templates, except one that was forgotten in the cookie URL of the preferences.
closes: 2740
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
I can't set `default_doi_resolver` in `settings.yml` if I'm using
`use_default_settings`. Searx seems to try to interpret all settings at root
level in `settings.yml` as dict, which is correct except for
`default_doi_resolver` which is at root level and a string::
File "/usr/lib/python3.9/site-packages/searx/settings_loader.py", line 125, in load_settings
update_settings(default_settings, user_settings)
File "/usr/lib/python3.9/site-packages/searx/settings_loader.py", line 61, in update_settings
update_dict(default_settings[k], v)
File "/usr/lib/python3.9/site-packages/searx/settings_loader.py", line 48, in update_dict
for k, v in user_dict.items():
AttributeError: 'str' object has no attribute 'items'
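One possible shape of the fix (a sketch only, not necessarily the merged patch): recurse only when both sides are dicts, otherwise overwrite the value::

    def update_dict(default_dict, user_dict):
        for k, v in user_dict.items():
            if isinstance(v, dict) and isinstance(default_dict.get(k), dict):
                update_dict(default_dict[k], v)
            else:
                # scalar values like default_doi_resolver simply replace the default
                default_dict[k] = v
        return default_dict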
Signed-off-by: Markus Heiser <markus@darmarit.de>
Suggested-by: @0xhtml https://github.com/searx/searx/issues/2722#issuecomment-813391659
The `url_for` function in the template context is not the one from Flask, it is
the one from `webapp`. The `webapp.url_for_theme` is different from its
namesake of Flask and has its quirks when called with the argument `_external=True`.
The `webapp.url_for_theme` can't handle absolute URLs since it pops a leading
'/'; here is the snippet of the old code::
url = url_for(endpoint, **values)
if settings['server']['base_url']:
if url.startswith('/'):
url = url[1:]
url = urljoin(settings['server']['base_url'], url)
Another drawback of (Flask's) `_external=True` is that it will not return the correct
scheme when searx (the Flask app) listens on http and is proxied by an https
server.
To get the right scheme, `HTTP_X_SCHEME` is needed by Flask (werkzeug). Since
this is not provided in every environment (e.g. behind Apache mod_wsgi, or when the
HTTP header is not fully set for some other reason), it is recommended to
get *script_name*, *server* and *scheme* from the configured `base_url`. If
`base_url` is specified, then these values are given preference over
Flask's generics.
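A sketch of how *script_name*, *server* and *scheme* can be derived from `base_url` (illustrative, not the exact webapp code)::

    from urllib.parse import urlsplit

    base_url = 'https://example.org/searx/'   # example value from settings.yml
    parts = urlsplit(base_url)

    scheme = parts.scheme          # 'https'
    server = parts.netloc          # 'example.org'
    script_name = parts.path       # '/searx/'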
BTW this patch normalizes the use of `url_for` in the `opensearch.xml` and drops the
need for `host` and `urljoin` in the template's context.
Signed-off-by: Markus Heiser <markus@darmarit.de>
Instead of a hard-coded `oadoi.org` default, use the default value from
`settings.yml`.
Fix an issue in the themes: The replacement 'current_doi_resolver' contains the
doi_resolver_url, not the name of the DOI resolver. Compare return value of::
searx.plugins.oa_doi_rewrite.get_doi_resolver(...)
Fix a typo in `get_doi_resolver(..)`: suggested by @kvch:
*L32 should set doi_resolver not doi_resolvers*
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
fr.wikipedia.org (and, it seems, no other wikipedia websites)
adds HTML to api_result['displayTitle'].
(Search for '!wp :fr Braid' for example)
The commit uses api_result['title'] instead.
The json response has been changed and it contains html chunks, which is
not compatible with our json engine, so we have to switch to html/xpath
parsing.
The get_client_id() function:
* fetches https://soundcloud.com
* then fetches each referenced javascript URL to get the client id.
This commit fetches the javascript URLs in the reverse order: the client id is in the last javascript URL.
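A rough sketch of that reverse lookup (the regular expressions here are guesses, not the engine's exact ones)::

    import re
    import requests

    def get_client_id():
        resp = requests.get('https://soundcloud.com')
        # collect the referenced javascript URLs from the start page
        js_urls = re.findall(r'src="(https://[^"]*\.js)"', resp.text)
        # the client id usually sits in the last script, so walk the list backwards
        for js_url in reversed(js_urls):
            js = requests.get(js_url).text
            match = re.search(r'client_id\s*:\s*"([a-zA-Z0-9]+)"', js)
            if match:
                return match.group(1)
        return None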
Added a line to the yacy entry to enable HTTP if the local yacy instance isn't using HTTPS. Otherwise, an error will be thrown in the logs: "No connection adapters were found for 'http://localhost:8090/yacysearch.json...'". This is likely related to ticket #2641 that forces HTTPS by default.
See https://github.com/requirejs/requirejs/issues/1816
requirejs loads one file: leaflet.
This commit:
* removes requirejs
* loads leaflet using a <script src...> HTML tag in searx/templates/oscar/base.html
Many things have been changed since the last review of this engine. This patch fixes
xpath selectors, implements suggestions and is a complete review / rewrite of the
engine.
Signed-off-by: Markus Heiser <markus@darmarit.de>
When initializing engines, a "SearxEngineResponseException" is logged very verbosely,
including full traceback information:
ERROR:searx.engines:yggtorrent engine: Fail to initialize
Traceback (most recent call last):
File "share/searx/searx/engines/__init__.py", line 293, in engine_init
init_fn(get_engine_from_settings(engine_name))
File "share/searx/searx/engines/yggtorrent.py", line 42, in init
resp = http_get(url, allow_redirects=False)
File "share/searx/searx/poolrequests.py", line 197, in get
return request('get', url, **kwargs)
File "share/searx/searx/poolrequests.py", line 190, in request
raise_for_httperror(response)
File "share/searx/searx/raise_for_httperror.py", line 60, in raise_for_httperror
raise_for_captcha(resp)
File "share/searx/searx/raise_for_httperror.py", line 43, in raise_for_captcha
raise_for_cloudflare_captcha(resp)
File "share/searx/searx/raise_for_httperror.py", line 30, in raise_for_cloudflare_captcha
raise SearxEngineCaptchaException(message='Cloudflare CAPTCHA', suspended_time=3600 * 24 * 15)
searx.exceptions.SearxEngineCaptchaException: Cloudflare CAPTCHA, suspended_time=1296000
For SearxEngineResponseException this is not needed. Those types of exceptions
can be a normal use case, e.g. for CAPTCHA errors as shown in the example
above. It should be enough to log a warning for such issues:
WARNING:searx.engines:yggtorrent engine: Fail to initialize // Cloudflare CAPTCHA, suspended_time=1296000
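A sketch of the intended handling (illustrative; the function and variable names come from the traceback above, the structure is my assumption)::

    import logging
    from searx.exceptions import SearxEngineResponseException

    logger = logging.getLogger('searx.engines')

    def engine_init(engine_name, init_fn, engine_settings):
        try:
            init_fn(engine_settings)
        except SearxEngineResponseException as exc:
            # expected engine-side issues (e.g. CAPTCHA): a short warning is enough
            logger.warning('%s engine: Fail to initialize // %s', engine_name, exc)
        except Exception:
            # anything else still gets the full traceback
            logger.exception('%s engine: Fail to initialize', engine_name)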
closes: #2612
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
The old xpath configuration for google scholar did not work and is replaced by a
python implementation.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
- unittest2 is a backport of the new features added to the unittest testing
framework in Python 2.7
- unittest2 was only needed in py2 and can be dropped now
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Bing has a list of regions that it supports and some of these regions
may have more than one possible language.
In some cases, like Switzerland, these languages are always shown as
options, so there is no issue. But in other cases, like Andorra, Bing
will only show one language at a time, either the region's default or
the request's language if the latter is supported by that region.
For example, if the HTTP request is in French, Andorra will appear as
fr-AD but if the same page is requested in any other language Andorra
will appear as ca-AD.
This is especially a problem when Bing assumes that the request is in
English, because it overrides enough language codes to make several major
languages like Arabic disappear from the languages.py file.
To avoid that issue, I set the Accept-Language header to a language
that's only supported in one region to hopefully avoid these overrides.
Use a SPARQL request on wikidata to get the list of currencies.
currencies.json contains the translation for all supported searx languages.
Supersedes #993
At the moment videos without a description are not shown - setting
default content to "" fixes this.
Another current bug is that thumbnails are not displayed. This is caused
by a double slash in the url. To fix this, every trailing slash is now
stripped (for backwards compatibility) and the API response is correctly
parsed.
* searx understands "!ddg !g time" as: send "!g time" to DDG
* !g is a DDG bang for Google: DDG returns an HTTP redirect to Google
This commit adds the allows_redirect param not to follow HTTP redirects.
The DDG engine returns an empty result as before, without the HTTP redirect.
On some queries (like an IT error message), wikipedia returns an HTTP error 400.
This commit returns an empty result instead of showing an error to the user.
Some JSON APIs return HTML in either the title or the content.
This commit adds two new parameters to the json_engine:
content_html_to_text and title_html_to_text, False by default.
If True, searx.utils.html_to_text removes the HTML tags.
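A sketch of the intended behaviour (the parameter handling shown is illustrative; html_to_text is the existing helper from searx.utils)::

    from searx.utils import html_to_text

    # hypothetical per-engine settings
    title_html_to_text = False
    content_html_to_text = True

    def clean_result(title, content):
        if title_html_to_text:
            title = html_to_text(title)
        if content_html_to_text:
            content = html_to_text(content)
        return title, content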
Update crossref, openairedatasets and openairepublications engines
The duckduckgo engine requires an additional request after the results have been sent.
This commit makes sure that the second request uses the same HTTPAdapter,
i.e. the same IP address and the same proxy.
The new version of MetaGer needs to reload the results (into an iframe) with a
unique tag (see HTML response below).
Implementing a dedicated metager-engine for searx makes no sense to me. The
great days of MetaGer seem to be over. I remember the good old days when this
project started in the 90's of the last century. But in the last few years it
has become more and more crap. As the name suggests, MetaGer was made for
Germans in the first place. They have added English and Spanish translations, but
the i18n is very poor compared to what searx offers.
It's a pity, let's drop MetaGer.
This is the first response, the id (b82679980656899ba5a17ffd02a56846) is unique
for each query:
$ curl "https://metager.org/meta/meta.ger3?eingabe=foo&submit-query=&focus=web"
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<link rel="stylesheet" href="/index.css?id=b82679980656899ba5a17ffd02a56846">
<script src="/index.js?id=b82679980656899ba5a17ffd02a56846"></script>
<title>foo - MetaGer</title>
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1" />
</head>
<body>
<iframe id="mg-framed" src="https://metager.org/meta/meta.ger3?eingabe=foo&submit-query=&focus=web&mgv=b82679980656899ba5a17ffd02a56846" autofocus="true" onload="this.contentWindow.focus();"></iframe>
</body>
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Some of our interface locales include uppercase country codes,
which are separated by `_` instead of the more common `-`.
Also, a browser's `Accept-Language` header could be in lowercase.
This commit attempts to normalize those cases so a browser's
language+country codes can better match with our locales.
This solution assumes that our UI locales have nothing more than
language and optionally country. If we ever add a script specific
locale like `zh-Hant-TW` this would have to change to accommodate
that, but the idea would be pretty much the same as this fix.
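A minimal sketch of the kind of normalization meant here (illustrative only)::

    def normalize_locale(code):
        # 'pt_BR', 'pt-br' or 'PT-BR' all become 'pt-BR'
        parts = code.replace('_', '-').split('-')
        lang = parts[0].lower()
        if len(parts) == 1:
            return lang
        return lang + '-' + parts[1].upper()

    assert normalize_locale('pt_BR') == 'pt-BR'
    assert normalize_locale('en-us') == 'en-US'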
The language_support variable is set to True by default,
and set to False in only 5 engines.
Except the documentation and the /config URL, this variable is not used.
This commit removes the variable definition in the engines, and
sets the value according to the supported_languages length: False when the length is 0,
True otherwise.
Close #2485
Avoid SearxEngineXPathException errors when parsing invalid results::
.//div[@class="yuRUbf"]//a/@href index 0 not found
Traceback (most recent call last):
File "./searx/engines/google.py", line 274, in response
url = eval_xpath_getindex(result, href_xpath, 0)
File "./searx/searx/utils.py", line 608, in eval_xpath_getindex
raise SearxEngineXPathException(xpath_spec, 'index ' + str(index) + ' not found')
searx.exceptions.SearxEngineXPathException: .//div[@class="yuRUbf"]//a/@href index 0 not found
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
BTW: fix indentation by 2 spaces
The additional tests have been commented out in the google engines so as not to
trigger any CAPTCHA issues.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
BTW: make the engines ready for search.checker:
- replace eval_xpath by eval_xpath_getindex and eval_xpath_list
- google_images: remove outer try/except block
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
The 'video.html' template from the 'oscar' design supports replacements
for *author* and *length*. Google-videos does not have an author; alternatively,
the publisher info is used for the *author*.
Hint: these replacements are not supported by the 'simple' design.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
This revise is based on the methods developed in the revise of the google engine
(see commit 410c2f9).
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
This revise is based on the methods developed in the revise of the google engine
(see commit 410c2f9).
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
the query "time" is convinient because most of the search engine will return some results,
but some engines in the general category will return documentation about the HTML tags <time> or <input type="time">
Removes module searx/brand.py and creates a namespace at searx.brand.
This patch is a first 'proof of concept'. Later we can decide to remove the
brand namespace entirely or not.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Without this commit the module searx checks the secret_key value.
With this commit, make docs, utils/standalone_searx.py and
utils/fetch_firefox_version.py work without SEARX_DEBUG=1.
For reference see https://github.com/searx/searx/pull/2386
from_bang is True when the user query contains a bang.
In this case the category is also set to 'none'.
The only usage of from_bang was in searx.webadapter.parse_specific:
if from_bang is True, then the EngineRef category is ignored and forced to 'none'.
This commit also removes the searx.webadapter.parse_specific function.
See searx.search.processors.abstract.EngineProcessor.
First searx calls the get_params method.
If the return value is not None, then searx calls the search method.
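A toy sketch of this contract (illustrative only; not searx's real classes or signatures)::

    class ToyProcessor:
        def get_params(self, search_query):
            if search_query.get('query'):
                return {'url': 'https://example.org/?q=' + search_query['query']}
            return None            # None means: skip this engine

        def search(self, query, params):
            print('would fetch', params['url'])

    processor = ToyProcessor()
    sq = {'query': 'time'}
    params = processor.get_params(sq)
    if params is not None:         # search is only called when get_params returned params
        processor.search(sq['query'], params)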
Check the HTTP response:
* detect some common CAPTCHA challenges (no solving). In this case the engine is suspended for a long time.
* otherwise raise HTTPError as before
The check is done in poolrequests.py (it was in search.py before).
update qwant, wikipedia, wikidata to use raise_for_httperror instead of raise_for_status
According to
820b468bfe/searx/engines/__init__.py (L87-L88)
an engine can have no category at all.
Without this commit, searx raises an exception in searx/results.py.
Note: in this case, the engine is not shown in the preferences.
Before commit 58d72f2, category was not set in xpath.py,
so searx/engines/__init__.py was setting the category to ['general'].
The commit 58d72f2 set the category to [], which is not replaced by searx/engines/__init__.py.
Consequence: the mojeek engine is hidden in the preferences.
This commit reverts the xpath.py change.
Close #2368
Add a new parameter "raise_for_status", set to True by default.
When True, any HTTP status code >= 300 raises an exception (#2332).
When False, the engine can manage the HTTP status code by itself (see the sketch below).
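A hedged sketch of how an engine might use it; where exactly the flag is consumed is an assumption on my part::

    # engine side (sketch): ask searx not to raise on HTTP status codes >= 300
    def request(query, params):
        params['raise_for_status'] = False
        params['url'] = 'https://example.org/search?q=' + query   # placeholder URL
        return params

    # response side (sketch): the engine inspects the status code itself
    def response(resp):
        if resp.status_code >= 300:
            return []      # e.g. treat redirects / errors as "no results"
        # ... parse resp.text into result dicts here ...
        return []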
- strip html tags and superfluous quotation marks from content
- remove not needed cookie from request
- remove superfluous imports
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Error pattern::
Engines cannot retrieve results:
digg (unexpected crash time data '2020-10-16T14:09:55Z' does not match format '%Y-%m-%d %H:%M:%S')
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
recoll is a local search engine based on Xapian:
http://www.lesbonscomptes.com/recoll/
By itself recoll does not offer web or API access,
this can be achieved using recoll-webui:
https://framagit.org/medoc92/recollwebui.git
This engine uses a custom 'files' result template
set `base_url` to the location where recoll-webui can be reached
set `dl_prefix` to a location where the file hierarchy as indexed by recoll can be reached
set `search_dir` to the part of the indexed file hierarchy to be searched, use an empty string to search the entire search domain
This change is backward compatible with the existing configurations.
If a settings.yml is loaded from a user-defined location (SEARX_SETTINGS_PATH or /etc/searx/settings.yml),
then these settings can rely on the default settings.yml with this option:
use_default_settings: True
Deviantart's request and response forms have been changed.
- fixed title
- fixed time_range_dict to 'popular-*-***'
- use image from <noscript> if exists
- drop obsolete "http to https, remove domain sharding"
- use query URL https://www.deviantart.com/search/deviations?page=5&q=foo
- add searx/engines/deviantart.py to pylint check (test.pylint)
Error pattern::
There DEBUG:searx:result: invalid title: {'url': 'https://www.deviantart.com/ ...
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
use
from searx.engines.duckduckgo import _fetch_supported_languages, supported_languages_url # NOQA
so it is possible to easily remove all unused imports using autoflake:
autoflake --in-place --recursive --remove-all-unused-imports searx tests
* URL / : the index page displayed the selected or the default category.
* URL / : when the q parameter is set using the URL, the redirect includes the URL query.
* URL /search : an empty query doesn't raise an exception.
This makes it easier to separately handle search and index requests
from a web server or from a reverse proxy.
If a request to index contains a query, a permanent redirect HTTP response
is returned. This should give some level of backwards compatibility
for users that have set a searx instance in their browser's search bar.
Xpath engine and results template changed to account for the fact that
archive.org doesn't cache .onions, though some onion engines might have
their own cache.
Disabled by default. Can be enabled by setting the SOCKS proxies to
wherever Tor is listening and setting using_tor_proxy as True.
Requires Tor and updating packages.
To avoid manually adding the timeout on each engine, you can set
extra_proxy_timeout to account for Tor's (or whatever proxy is used) extra
time.
- remove paging support: a "vqd" parameter is required between each request. This parameter is unique for each request
- update the URL (no redirect), use the POST method
- language support: works if there is no more than one request per minute, otherwise it is ignored!
* Fix "?q=test&engines=wikipedia": fix exception
* Fix "?q=test&engines=wikipedia&categories=images": now the engines from images category are included.
* Fix parse_timeout: make sure a value is always returned
* Various typing fixes (searx.webadapter, searx.search.SearchQuery)
When the user adds searx as a search engine, the browser loads the /opensearch.xml URL without the cookies.
Without the query parameters, the user preferences are ignored (method and autocomplete).
In addition, opensearch.xml is modified to support automatic updates,
see https://developer.mozilla.org/en-US/docs/Web/OpenSearch
Always call the engine initialization, except on the first run of werkzeug with the reload feature.
the reload feature is activated when:
* searx_debug is True (SEARX_DEBUG environment variable or settings.yml)
* FLASK_APP=searx/webapp.py FLASK_ENV=development flask run (see https://flask.palletsprojects.com/en/1.1.x/cli/ )
Fix SEARX_DEBUG=0 make docs
docs/admin/engines.rst : engines are initialized
See https://github.com/searx/searx/issues/2204#issuecomment-701373438
Since October 1, 2020, google has changed the 'class' attribute of the HTML
result page.
Fix the xpath expressions and ignore <div class="g" ../> sections which do not
match the title's xpath expression.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
requests 2.24.0 uses the ssl module, except if it doesn't support SNI; in this case searx falls back to pyopenssl.
searx logs a critical message and exits if the ssl module doesn't support SNI and pyOpenSSL is not installed.
searx logs a critical message and exits if the ssl version is older than 1.0.2.
In requirements.txt, pyopenssl is still required when installing searx, as a fallback.
Was previously a Dict with two or three keys: name, category, from_bang.
Make clear that this is an engine reference (see tests/unit/test_search.py for example).
All variables using this class are renamed accordingly.
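Something along these lines (a sketch of the EngineRef idea; the real class may differ)::

    from dataclasses import dataclass

    @dataclass
    class EngineRef:
        # replaces the former {'name': ..., 'category': ..., 'from_bang': ...} dict
        name: str
        category: str
        from_bang: bool = False

    ref = EngineRef(name='wikipedia', category='general')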
* Log each call to get_locale: display the URL, the locale and the source (browser, preferences, form).
* Rename _get_browser_language to _get_browser_or_settings_language to match the actual code.
AJAX requests send the X-Requested-With HTTP header,
so searx.webapp.autocompleter returns the results with the expected data format.
Related to #2127. Close #2203.
A new "base" engine called command is introduced. It is the foundation for all command line engines for now.
You can use this engine to create your own command line engine.
Add some engines (commented out to make sure no one enables anything accidentally):
* git grep: This engine lets you grep in the searx repo.
* locate: If locate is installed and initialized, you can search on the FS.
* find: You can find files with a specific name from where you started searx.
* pattern search in files: This engine utilizes the command fgrep.
* regex search in files: This engine runs `grep` to find a file based on its contents.
and some other exceptions:
* KeyboardInterrupt
* SystemExit
* RuntimeError
* SystemError
* ImportError: an engine with an unmet dependency will stop everything.
Sending queries through POST, while better for privacy, breaks functionality
with certain extensions (e.g. Firefox containers). Since Firefox does
not send cookies when requesting `/opensearch.xml`, users cannot easily
switch to GET on the client side unless they make a custom search
engine. This commit allows admins to modify the default method on their
side so they can set it to GET if needed.
Sending query params over GET seems to be the only way to be able to
enable autocomplete in the browser. This commit adds the necessary URL
formatting to opensearch.xml. In order to identify queries coming from
the URL bar (rather than an AJAX request), which requires a different
JSON format and MIME type, the request headers are checked for
"X-Requested-With: XMLHttpRequest" which is added by jQuery request.
- enabling HTTPS for sci-hub.tw by default
- making sci-hub the default DOI resolver as it has the largest collection of scientific articles.
- replaced doai.io with dissem.in, as it redirects to this new domain.
Co-authored-by: Aurora of Earth <auroraofearth@ya.ru>
* Made first attempt at the bangs redirects plugin.
* It redirects. But in a messy way via javascript.
* First version with custom plugin
* Added a help page and an operator to see all the bangs available.
* Changed to .format because of support
* Changed to .format because of support
* Removed : in params
* Fixed path to json file and changed bang operator
* Changed bang operator back to &
* Refactored getting search query. Also changed bang operator to ! and is now working.
* Removed prints
* Removed temporary bangs_redirect.js file. Updated plugin documentation
* Added unit test for the bangs plugin
* Fixed a unit test and added 2 more for bangs plugin
* Changed back to default settings.yml
* Added myself to AUTHORS.rst
* Refactored working of custom plugin.
* Refactored _get_bangs_data from list to dict to improve search speed.
* Decoupled bangs plugin from webserver with redirect_url
* Refactored bangs unit tests
* Fixed unit test bangs. Removed double parsing in bangs.py
* Removed a dumb print statement
* Refactored bangs plugin to core engine.
* Removed bangs plugin.
* Refactored external bangs unit tests from plugin to core.
* Removed custom_results/bangs documentation from plugins.rst
* Added newline in settings.yml so the PR stays clean.
* Changed searx/plugins/__init__.py back to the old file
* Removed newline search.py
* Refactored get_external_bang_operator from utils to external_bang.py
* Removed unnecessary import form test_plugins.py
* Removed _parseExternalBang and _isExternalBang from query.py
* Removed get_external_bang_operator since it was not necessary
* Simplified external_bang.py
* Simplified external_bang.py
* Moved external_bangs unit tests to test_webapp.py. Fixed return in search with external_bang
* Refactored query parsing to unicode to support python2
* Refactored query parsing to unicode to support python2
* Refactored bangs plugin to core engine.
* Refactored search parameter to search_query in external_bang.py
Previously only image/jpeg was not proxied.
This commit doesn't proxify any MIME type starting with "image/".
This is a quick fix for PR #1985: the google_image engine can return some data URLs.
A new option is added to engines to hide error messages from users. It
is called `display_error_messages` and by default it is set to `True`.
If it is set to `False` error messages do not show up on the UI.
Keep in mind that engines are still suspended if needed regardless of
this setting.
Closes #1828
The gigablast API has changed and seems to have some quirks; this is the first
revision. More work (hacks) is needed.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Since there are zero results, we can remove it:
$ make engines.languages
fetch languages ..
...
fetched 0 languages from engine gigablast
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Inline styles are blocked by default with Content Security Policy (CSP). Move
the rest of inline styles to CSS and correct the HTML template of the oscar
preference page.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
A *brand* of searx is a fork which might have its own design and some special
functions which might be reasonable in a special context.
In this sense, the fork might have its own documentation but not its own issue
tracker. The *upstream* of a brand is always https://github.com/asciimoo from
where the brand-fork pulls the master branch regularly. A fork which has its
own issue tracker is a spin-off and out of the scope of the searx project
itself. The conclusion is:
- hard code ISSUE_URL (in the Makefile)
- always refer to DOCS_URL
- links in the about page refer to the *upstream* (searx project)
except DOCS_URL
- "fork me on github" ribbons refer to the *upstream*
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
We have some variables in the build environment which are also needed in the
grunt process when building themes. These variables are relevant if one
creates a fork with its own branding. We treat these variables under the term
'brands'.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
We have some variables in the build environment which are also needed in the
templating process. These variables are relevant if one creates a fork with
its own branding. We treat these variables under the term 'brands'.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
dateutil.parser.parse() does not know the Spanish date format, which
leads to a ValueError. Fixes #1870
Traceback (most recent call last):
File "/usr/local/searx/searx/search.py", line 160, in search_one_http_request_safe
search_results = search_one_http_request(engine, query, request_params)
File "/usr/local/searx/searx/search.py", line 97, in search_one_http_request
return engine.response(response)
File "/usr/local/searx/searx/engines/startpage.py", line 102, in response
published_date = parser.parse(date_string, dayfirst=True)
File "/usr/local/searx/searx-ve/lib/python3.6/site-packages/dateutil/parser/_parser.py", line 1358, in parse
return DEFAULTPARSER.parse(timestr, **kwargs)
File "/usr/local/searx/searx-ve/lib/python3.6/site-packages/dateutil/parser/_parser.py", line 649, in parse
raise ValueError("Unknown string format:", timestr)
ValueError: ('Unknown string format:', '24 Ene 2013')
When selecting languages other than 'en', bing-video did not handle the language
correctly and gave very bad results. Since the User-Agent is normally rotated in
searx, the behavior of a !biv search was unpredictable and paging was broken.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
The bing_news bug (discussed in #1838) was caused by wrong language tags, which
was fixed in e0c99d9d / no need to change the bing_news search string.
closes: https://github.com/asciimoo/searx/issues/1838
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
To get meaningful diffs, the json file has to be sorted. Before applying any
further content patch, the json file needs an initial sort (without changing any
content).
Sorted by::
import sys, json
with open('engines_languages.json') as f:
j = json.load(f)
with open('engines_languages.json', 'w') as f:
json.dump(j, f, indent=2, sort_keys=True)
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
When results are fetched from any programming related documentation site
(like git-scm.com, docs.python.org etc), content in Info box is shown as
raw HTML code.
This change addresses the issue by using "safe" filter feature provided by
Django. See,
- https://docs.djangoproject.com/en/3.0/ref/templates/builtins/#safe
- Searx issue tracker (issue #1649), for more information.
Resolves: #1649
When results are fetched from any programming related documentation site
(like git-scm.com, docs.python.org etc), content in Info box is shown as
raw HTML code.
This change addresses the issue by using "safe" filter feature provided by
Django. See,
- https://docs.djangoproject.com/en/3.0/ref/templates/builtins/#safe
- Searx issue tracker (issue #1649), for more information.
Resolves: #1649
On low-width devices like mobile, tablet etc., the info box is present at
the bottom of the page.
This change addresses the issue by rearranging column grids for low
width devices and moving the side bar to the top of the page. See
- https://getbootstrap.com/docs/3.3/css/#grid-column-ordering.
- and Searx issue tracker (issue #1777), for more information.
Effect: Along with the Info box, the Suggestion and Link boxes also move to the top of
the page.
Resolves: #1777
Infinite scroll adds a `hr` tag to split up the sections loaded by it.
The vim bindings `j` and `k`, which jump to the next and previous result
respectively, search for a **direct** sibling with the class `result`.
With the `hr` between results a direct sibling cannot be found. To fix
this we remove the restriction of it having to be a direct sibling.
Adding a CR in some files and in others not is a good starting point for a
DOS+Unix mess we all have already seen in many projects.
The patch fixes all files matching (even those coming from grunt's build)::
find ./searx -exec file {} \; | grep CR
BTW: Same with mixing TAB and SPACE indent styles in one and the same file. So
if sources are touched here in this patch, it's also fixed.
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Fix this error while travis build::
/home/travis/build/asciimoo/searx/searx/engines/duckduckgo_definitions.py:21:44: E225 missing whitespace around operator
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Add image format and source information to display - needs changes to engines to actually display something.
Displays result.source (website from which the image was taken) and result.img_format (image type and size).
Result is styled with result-format and result-source classes. See PR #1566 for an example of an engine which has the necessary changes.
Strip <span class="highlight">...</span> in the oscar image template.
This PR fixes the result count from bing, which was throwing a (hidden) error, and adds a validation to avoid reading more results than available.
For example:
If there are 100 results for some search and we try to get results from 120 to 130, Bing will send back the results from 0 to 10 and no error. If we compare the result count with the first parameter of the request, we can avoid these "invalid" results.
The new url parameter "timeout_limit" sets the timeout limit, defined in seconds.
Example: "timeout_limit=1.5" means the timeout limit is 1.5 seconds.
In addition, the query can start with <[number] to set the timeout limit.
For a number between 0 and 99, the unit is the second:
Example: "<30 searx" means the timeout limit is 30 seconds.
For a number above 100, the unit is the millisecond:
Example: "<850 searx" means the timeout is 850 milliseconds.
In addition, there is a new optional setting: outgoing.max_request_timeout.
If not set, the user timeout can't go above the searx configuration (as before: the max timeout of the selected engines for a query).
If the value is set, the user can set a timeout between 0 and max_request_timeout using
<[number] or timeout_limit query parameter.
Related to #1077
Updated version of PR #1413 from @isj-privacore
Characters that were not ASCII were incorrectly decoded.
Add a helper function: searx.utils.ecma_unescape (Python implementation of the JavaScript unescape function).
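Roughly, such a helper decodes both the %uXXXX and %XX escape forms (a sketch; the real implementation may differ)::

    import re

    _unescape_4 = re.compile(r'%u([0-9a-fA-F]{4})')
    _unescape_2 = re.compile(r'%([0-9a-fA-F]{2})')

    def ecma_unescape(string):
        # mirrors JavaScript's unescape(): "%u5409" -> chr(0x5409), "%20" -> " "
        string = _unescape_4.sub(lambda m: chr(int(m.group(1), 16)), string)
        string = _unescape_2.sub(lambda m: chr(int(m.group(1), 16)), string)
        return string

    assert ecma_unescape('%66%6f%6f%20bar') == 'foo bar'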
* Search URL is https://www.wikidata.org/w/index.php?{query}&ns0=1 (with ns0=1 at the end to avoid an HTTP redirection)
* url_detail: remove the disabletidy=1 deprecated parameter
* Add eval_xpath function: compile once for all xpath.
* Add get_id_cache: retrieve all HTML nodes with an id, to avoid the slow-to-process dynamic xpath '//div[@id="{propertyid}"]'.replace('{propertyid}')
* Create an etree.HTMLParser() instead of using the global one (see #1575)
Fetch complete JSON data block, use legend to extract images.
Unquote urlencoded strings.
Add image description as 'content'.
Add 'img_format' and 'source' data (needs PR #1567 to enable this data to be displayed).
Show images which lack ownerid instead of discarding them.
use data from embedded JSON to improve results (e.g. real page title), add image format and source info (see PR #1567), improve paging logic (it now works)
Server Timing specification: https://www.w3.org/TR/server-timing/
In the browser Dev Tools, focus on the main request, there are the responses per engine in the Timing tab.
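For illustration, this is how such a header could be set in a Flask app (a hedged sketch with made-up durations, not searx's actual code)::

    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def add_server_timing(response):
        # one metric per engine, duration in milliseconds (values are made up)
        response.headers['Server-Timing'] = (
            'total;dur=1240.5, engine_wikipedia;dur=320.0, engine_duckduckgo;dur=510.2'
        )
        return response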
doi_resolvers / default_doi_resolver were missing in the settings_robots.yml file, so the test server was not able to start (crash). Since the output wasn't displayed, it was not obvious why Selenium couldn't connect to searx.
Locale and search language were always defined with the English value.
This patch inits the locale on `pre_request` in order to define the
default value of locale and language preferences.
Plus, the `best_match` function provided by the flask-babel library did not
work as expected. So the `match_language` function provided
by searx is used to detect whether the language from the Accept-Language
header can be used in the searx project.
This improves the user experience by loading the next entries shortly before the user gets to the bottom. It makes the scrolling smoother, without a break in between.
It also fixes an error in my browser where scrolling never hit the defined number. When I debugged it I got a `.scrolltop` of 1092.5 and a `doc.height - win.height` of 1093, so the condition was never true.
- Because there is no full image url in the dom, we replace "image_url" with the same url as the "url" (url of the source).
See example HTML https://gist.github.com/Nachtalb/2dea8a4d2c723c49226ad9645838121f
- Remove unused import
- Fix google image search title
- Keep google image safe value up to date