I just tested the 47 servers listed in https://searx.space/data/instances.json. I did this without a browser: no Javascript, cookies, etc. A good number of them worked fine.
Who knows what the people running those instances do with the search data they acquire.
What I like about searx though is the list of search engines it potentially targets. Comprehensive lists of search engines on the internet are always valuable. I see searx as a supply of "parts" with which one can make something of their own. I have made a metasearch utility for myself.
C. Can open an index.html of saved search results in any browser; each query gets its own SERP; search results are saved in a directory that can be tarballed and compressed, allowing simple transfer to any computer with a UNIX userland
D. Easy to add new sites; follows a fairly standard template; currently at only eight sites, but adding more (like the ones in searx)
E. Requires only standard UNIX utilities; consists of small shell scripts of less than 2000 chars
F. Fast; no cruft
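A minimal sketch of the save-and-transfer step described in C, assuming a hypothetical ./results directory with one subdirectory per query (the layout and names here are illustrative, not the author's actual ones):

```shell
# Hypothetical layout: one directory per query, each holding an index.html SERP.
mkdir -p results/example-query
printf '<ol><li><a href="https://example.com">example.com</a></li></ol>\n' \
  > results/example-query/index.html

# Bundle the whole results tree for transfer; only standard UNIX utilities needed.
tar -cf - results | gzip > results.tar.gz

# On the destination machine:  gunzip < results.tar.gz | tar -xf -
```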
Unique features:
1. Streamlined SERP; URLs only, minimal HTML, i.e., <a>, <pre>, <ol>, <li>, <!-- -->; no images, Javascript or CSS; SERP contains timestamps in HTML comments to indicate when each query was submitted
2. Each SERP contains deduped batches of results from different search engines; source search engine indicated by short prefix; if desired, can re-sort to intersperse results from different sources, e.g., sort by URL
3. Continuation of search; allows retrieval of 100s of results by spreading queries across periods of time too long for websites to track, thus avoiding ridiculously small result limits and temporary bans for "searching too fast" <--- I could not find anyone else using this approach
4. By default only minimum headers sent; custom headers can be sent when appropriate for a particular site, e.g., DNT to findx.com; allows for complete customisation of presence/absence/content/order/case of HTTP headers, thus can potentially emulate any browser or other HTTP client (also supports HTTP/1.1 pipelining, which curl cannot do)
5. Can be used with any TCP client; not limited to one library, e.g., libcurl; works great with proxies like stunnel and haproxy
6. URL params or hidden form fields that can potentially be used to link one SERP with another SERP are removed or rendered ineffective
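The dedupe/re-sort step in feature 2 can be sketched with standard utilities, assuming an illustrative line format of "<engine-prefix> <url>" (the actual format and file names are my guesses, not the author's):

```shell
# Assumed line format: "<engine-prefix> <url>", one result per line.
cat > serp.txt <<'EOF'
ddg https://example.org/b
bing https://example.org/a
ddg https://example.org/a
EOF

# Dedupe on the URL field, keeping the first engine that returned it ...
awk '!seen[$2]++' serp.txt > deduped.txt
# ... then re-sort by URL to intersperse results from different sources.
sort -k2 deduped.txt
# prints:
#   bing https://example.org/a
#   ddg https://example.org/b
```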
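Feature 4's point about header control and pipelining follows from building the request bytes by hand rather than through an HTTP library: once the request is just a string, every header's presence, order and case is up to you, and two requests can be written back-to-back on one connection. A sketch (the hosts and paths are placeholders; the commented nc line is one possible TCP client among many):

```shell
# Two pipelined HTTP/1.1 requests, constructed byte-for-byte.
# Header presence, order and case are fully under our control.
printf 'GET /one HTTP/1.1\r\nHost: example.com\r\nDNT: 1\r\n\r\nGET /two HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' > requests.txt

# Feed the bytes to any TCP client over a single connection, e.g.:
#   nc example.com 80 < requests.txt > responses.txt
```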
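Feature 6 amounts to rewriting result URLs before they reach the SERP. A minimal sed-based sketch, assuming a hypothetical session parameter named "sid" (the parameter name is illustrative; real engines use various names, and a robust version would handle a linking param in the first position too):

```shell
# Strip a session-style query parameter that could link one SERP to the next.
url='https://search.example/results?q=test&sid=abc123&page=2'
clean=$(printf '%s\n' "$url" | sed -e 's/&sid=[^&]*//')
printf '%s\n' "$clean"
# prints: https://search.example/results?q=test&page=2
```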