Chinese Word Segmentation in Java with IK Analyzer
http://blog.csdn.net/lijun7788/article/details/7719166#
IK Analyzer is an open-source Chinese word segmentation framework built on Lucene. Download: http://code.google.com/p/ik-analyzer/downloads/list
Add the following files to your project:
IKAnalyzer.cfg.xml
IKAnalyzer2012.jar
lucene-core-3.6.0.jar
stopword.dic
No changes to any of these files are needed; the defaults work out of the box.
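For reference, IKAnalyzer.cfg.xml is a standard Java properties XML file that points IK at optional user dictionaries. A typical layout looks like the sketch below; the `ext.dic` entry is a hypothetical extension dictionary you would create yourself, and the whole `ext_dict` entry can be omitted if you have none:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>IK Analyzer extension configuration</comment>
    <!-- hypothetical user extension dictionary (one word per line, UTF-8) -->
    <entry key="ext_dict">ext.dic;</entry>
    <!-- stopword dictionary shipped with the download -->
    <entry key="ext_stopwords">stopword.dic;</entry>
</properties>
```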
Sample code using the IKAnalyzer Lucene wrapper:
package com.haha.test;

import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.wltea.analyzer.lucene.IKAnalyzer;

public class Test2 {
    public static void main(String[] args) throws IOException {
        String text = "基于java语言开发的轻量级的中文分词工具包";
        // Create the analyzer; true enables smart (coarse-grained) mode
        Analyzer anal = new IKAnalyzer(true);
        StringReader reader = new StringReader(text);
        // Tokenize the text
        TokenStream ts = anal.tokenStream("", reader);
        CharTermAttribute term = ts.getAttribute(CharTermAttribute.class);
        // Iterate over the tokens
        while (ts.incrementToken()) {
            System.out.print(term.toString() + "|");
        }
        reader.close();
        System.out.println();
    }
}
Output:
基于|java|语言|开发|的|轻量级|的|中文|分词|工具包|
The same result using the core IKSegmenter API directly (no Lucene classes involved):
package com.haha.test;

import java.io.IOException;
import java.io.StringReader;

import org.wltea.analyzer.core.IKSegmenter;
import org.wltea.analyzer.core.Lexeme;

public class Test3 {
    public static void main(String[] args) throws IOException {
        String text = "基于java语言开发的轻量级的中文分词工具包";
        StringReader sr = new StringReader(text);
        // true enables smart (coarse-grained) mode, as in Test2
        IKSegmenter ik = new IKSegmenter(sr, true);
        Lexeme lex = null;
        // next() returns null when the input is exhausted
        while ((lex = ik.next()) != null) {
            System.out.print(lex.getLexemeText() + "|");
        }
    }
}
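To make the output above less of a black box: segmenters like IK are, at their core, dictionary-driven. The following self-contained sketch illustrates forward maximum matching, a classic dictionary-based segmentation strategy; the tiny hard-coded dictionary is purely illustrative, and real IK adds a large bundled dictionary plus ambiguity resolution on top of this idea.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative forward maximum matching: at each position, greedily take
// the longest dictionary word; fall back to a single character on a miss.
public class MaxMatchDemo {
    static List<String> segment(String text, Set<String> dict, int maxLen) {
        List<String> tokens = new ArrayList<>();
        int i = 0;
        while (i < text.length()) {
            int end = Math.min(i + maxLen, text.length());
            String match = null;
            // Try candidates from longest to shortest
            for (int j = end; j > i; j--) {
                String cand = text.substring(i, j);
                if (dict.contains(cand)) {
                    match = cand;
                    break;
                }
            }
            if (match == null) {
                // No dictionary hit: emit a single character
                match = text.substring(i, i + 1);
            }
            tokens.add(match);
            i += match.length();
        }
        return tokens;
    }

    public static void main(String[] args) {
        Set<String> dict = new HashSet<>(Arrays.asList("中文", "分词", "工具包"));
        // prints: 中文|分词|工具包
        System.out.println(String.join("|", segment("中文分词工具包", dict, 3)));
    }
}
```

The `maxLen` parameter bounds the candidate window to the longest word in the dictionary, which keeps the scan linear in practice.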