A search bar that searches a website's HTML files

Updated: 2024-10-24 22:19:03

Problem description

I'm currently using Neocities to build my website, because it's one of the few hosts I've found with no restrictive site builder: you get to code the website yourself, and the server costs are covered for you. The downside is that the site deliberately does not support PHP; I haven't been given a clear answer as to why, but I digress.

I'd like to build a search bar that searches my site's HTML files. I assume that's not hard to do without PHP; the files are already there, they just need to be searched. The problem is, if it's not done through PHP, it's done through JavaScript, which I don't know. I was wondering if someone could guide me through implementing this on my website. I'd prefer not to use engines like Google Custom Search, as they put their watermark in the bar and it looks very unprofessional.

Recommended answer

You have at least a couple of alternatives: Lunr and search-index. They both can run in the browser.
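
Before reaching for either library, the core idea both of them implement can be sketched in plain JavaScript with no dependencies: represent every page as a {url, title, body} record and match a query against the text fields. The data and names below (pages, searchPages) are illustrative, not part of either library's API.

```javascript
// A hand-rolled stand-in for what Lunr/search-index do at a basic level:
// each page becomes a {url, title, body} record, and the query is matched
// against the title and body text. The sample data is made up.
var pages = [
  {url: '/index.html', title: 'Home', body: 'Welcome to my site'},
  {url: '/about.html', title: 'About', body: 'About me and this site'}
]

function searchPages (query, docs) {
  var q = query.toLowerCase()
  return docs.filter(function (doc) {
    return (doc.title + ' ' + doc.body).toLowerCase().indexOf(q) !== -1
  })
}

console.log(searchPages('about', pages))
```

Real libraries add tokenization, stemming, and ranking on top of this; the sketch only shows the shape of the data flow.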

I'm doing some work on search-index, so that is the one I know the most about, but I think both of them are good for your use case.

As some of the others say, you need to solve how to add the content of all the pages into one search. I guess the easiest way is to have your content both as static pages and as an array of JSON objects that is fed to the search engine. Then you can have the search index stored in LevelDB (Chrome) or IndexedDB (Firefox) until the next time a recurring user looks at the website.
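
As a sketch of that caching idea, the JSON document array can be serialized once and reloaded on the next visit. The function and key names here are assumptions, and the storage backend is passed in as a parameter: in the browser it would be something like window.localStorage (or an IndexedDB wrapper), while the plain-object stub below lets the logic run anywhere.

```javascript
// Cache the JSON document array between visits so a returning user
// doesn't re-fetch every page. The storage key is illustrative.
function saveDocs (storage, docs) {
  storage.setItem('site-search-docs', JSON.stringify(docs))
}

function loadDocs (storage) {
  var raw = storage.getItem('site-search-docs')
  return raw ? JSON.parse(raw) : null
}

// Minimal in-memory stand-in for localStorage, for demonstration.
var memoryStore = {
  data: {},
  setItem: function (key, value) { this.data[key] = value },
  getItem: function (key) { return this.data[key] || null }
}

saveDocs(memoryStore, [{url: '/about.html', title: 'About', body: 'About me'}])
console.log(loadDocs(memoryStore))
```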

If you have a link to the website and want to try out a solution like this, I'm in the process of making a library for it.

So far I've got the basics of the crawler running. It means you have to give the script all the URLs you want to crawl, but for a small site that's manageable. I'll make it find and crawl URLs on its own later.

const cheerio = require('cheerio')

// URL of one page to index; later this would come from a list.
const url = 'example/a-page-to-crawl.html'

fetch(url)
  .then(function (response) {
    return response.text()
  })
  .then(function (html) {
    // Parse the fetched HTML and pull out the searchable fields.
    const $ = cheerio.load(html)
    var title = $('title').text()
    var body = $('body').text()
    var item = {url: url, title: title, body: body}
    console.log(item)
  })
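
The snippet above covers one page; generalizing to a hand-maintained URL list is a Promise.all over the same steps. To keep this sketch runnable without network access or cheerio, the fetch-and-parse steps are injected as functions; in the real script they would be the fetch()/cheerio.load() code above. All names here are illustrative.

```javascript
// Crawl every URL in a hand-maintained list and collect one
// {url, title, body} item per page. fetchText and parse are injected:
// in the browser, fetchText would wrap fetch() and parse would use
// cheerio, as in the single-page snippet.
function crawlAll (urls, fetchText, parse) {
  return Promise.all(urls.map(function (url) {
    return fetchText(url).then(function (html) {
      var fields = parse(html)
      return {url: url, title: fields.title, body: fields.body}
    })
  }))
}

// Demo with stubbed fetching/parsing so it runs anywhere.
crawlAll(
  ['/a.html', '/b.html'],
  function (url) { return Promise.resolve('<title>' + url + '</title>') },
  function (html) { return {title: html, body: ''} }
).then(function (items) {
  console.log(items.length)
})
```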

The file needs to be browserified with the brfs plugin:

browserify -t brfs main.js --debug -o bundle.js
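
Once bundle.js is built, the remaining piece is the search bar itself. Below is a sketch of the rendering half, kept DOM-free: the renderResults helper is an assumption, not part of any library. In the page, the bundled script would read the input's value on each keystroke, run the search, and assign this markup to a results container's innerHTML.

```javascript
// Turn search hits into an HTML result list. In the page, something like
//   document.getElementById('search-results').innerHTML = renderResults(hits)
// would display it (element id assumed); here we only build the string.
function renderResults (items) {
  if (items.length === 0) return '<p>No results.</p>'
  return '<ul>' + items.map(function (item) {
    return '<li><a href="' + item.url + '">' + item.title + '</a></li>'
  }).join('') + '</ul>'
}

console.log(renderResults([{url: '/about.html', title: 'About'}]))
```

Note that real page titles and bodies would need HTML-escaping before being interpolated into markup; that is omitted here for brevity.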

Published: 2023-11-28 19:32:30. Source: https://www.elefans.com/category/jswz/34/1643683.html