Standard Tokenizer

The standard tokenizer provides grammar-based tokenization, based on the Unicode Text Segmentation algorithm (as specified in Unicode Standard Annex #29), and works well for most languages.

Example output

POST _analyze
{
  "tokenizer": "standard",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}

The above sentence would produce the following terms:

[ The, 2, QUICK, Brown, Foxes, jumped, over, the, lazy, dog's, bone ]

Configuration

The standard tokenizer accepts the following parameters:

max_token_length: The maximum token length. If a token exceeds this length, it is split at max_token_length intervals. Defaults to 255.

Example configuration

In this example, we configure the standard tokenizer with a max_token_length of 5 (for demonstration purposes):

PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "standard",
          "max_token_length": 5
        }
      }
    }
  }
}

POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}

The above example produces the following terms:

[ The, 2, QUICK, Brown, Foxes, jumpe, d, over, the, lazy, dog's, bone ]

Note that jumped is 6 characters long, so with max_token_length set to 5 it is split at the 5-character mark into the terms jumpe and d.
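If you only want to experiment with max_token_length without creating an index, the _analyze API also accepts an inline tokenizer definition. A minimal sketch, assuming an Elasticsearch version whose _analyze API supports transient custom tokenizers; the output should match the my_analyzer example above:

POST _analyze
{
  "tokenizer": {
    "type": "standard",
    "max_token_length": 5
  },
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}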
