mirror of https://github.com/searxng/searxng.git synced 2024-11-22 12:10:11 +01:00

[fix] drop etools engine module

The implementation of the etools engine is poor: no date-range support, no
language support, and it is broken by a CAPTCHA.

etools is a metasearch engine; the major search engines it supports (Google,
Bing, Wikipedia, Yahoo) are already available in SearXNG.

While etools does cover a few engines that SearXNG does not yet support
directly, support for those engines should be added to SearXNG itself if there
is demand.

In practice, the worse etools results are mixed into SearXNG's result list
alongside good results from engines we already have (as long as there is no
CAPTCHA).

In the best case, what we gain from etools is something like results from
de.ask.com for a German request; in all other cases, worse results bubble up
in SearXNG's result list.

[1] https://github.com/searxng/searxng/issues/696#issuecomment-1005855499

Closes: https://github.com/searxng/searxng/issues/696
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Markus Heiser 2022-01-06 14:42:28 +01:00
parent 93c6829b27
commit 5dd3442f83
2 changed files with 0 additions and 65 deletions


@@ -1,58 +0,0 @@
# SPDX-License-Identifier: AGPL-3.0-or-later
"""
 eTools (Web)
"""

from lxml import html
from urllib.parse import quote

from searx.utils import extract_text, eval_xpath

# about
about = {
    "website": 'https://www.etools.ch',
    "wikidata_id": None,
    "official_api_documentation": None,
    "use_official_api": False,
    "require_api_key": False,
    "results": 'HTML',
}

categories = ['general', 'web']
paging = False
safesearch = True

base_url = 'https://www.etools.ch'
search_path = (
    # fmt: off
    '/searchAdvancedSubmit.do'
    '?query={search_term}'
    '&pageResults=20'
    '&safeSearch={safesearch}'
    # fmt: on
)


def request(query, params):
    if params['safesearch']:
        safesearch = 'true'
    else:
        safesearch = 'false'

    params['url'] = base_url + search_path.format(search_term=quote(query), safesearch=safesearch)
    return params


def response(resp):
    results = []

    dom = html.fromstring(resp.text)

    for result in eval_xpath(dom, '//table[@class="result"]//td[@class="record"]'):
        url = eval_xpath(result, './a/@href')[0]
        title = extract_text(eval_xpath(result, './a//text()'))
        content = extract_text(eval_xpath(result, './/div[@class="text"]//text()'))

        results.append({'url': url, 'title': title, 'content': content})

    return results
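For reference, the deleted engine's `request()` did nothing more than fill the
`search_path` template. A standalone, stdlib-only sketch of that URL
construction (the `build_url` helper name is mine, not from the original
module):

```python
from urllib.parse import quote

# Reproduced from the deleted etools.py: base URL plus a query template.
base_url = 'https://www.etools.ch'
search_path = (
    '/searchAdvancedSubmit.do'
    '?query={search_term}'
    '&pageResults=20'
    '&safeSearch={safesearch}'
)

def build_url(query, safesearch_enabled):
    # etools expects the literal strings 'true'/'false' for safeSearch.
    safesearch = 'true' if safesearch_enabled else 'false'
    return base_url + search_path.format(search_term=quote(query), safesearch=safesearch)

print(build_url('free software', True))
# → https://www.etools.ch/searchAdvancedSubmit.do?query=free%20software&pageResults=20&safeSearch=true
```

Note that no language or time-range parameter exists in the template, which is
exactly the limitation the commit message cites.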


@@ -479,13 +479,6 @@ engines:
    timeout: 3.0
    disabled: true

  - name: etools
    engine: etools
    shortcut: eto
    disabled: true
    additional_tests:
      rosebud: *test_rosebud

  - name: etymonline
    engine: xpath
    paging: true