Get a Flink example program up and running in just a few simple steps.
1. Download and Start Flink
Flink runs on Linux, Mac OS X, and Windows. The only requirement for running Flink is a working Java 7.x (or higher) installation.
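You can check which Java version is installed from the command line:

$ java -version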
Download Flink from the downloads page (http://flink.apache.org/downloads.html). If you only plan to use the local file system, any Hadoop version will work fine. Go to the download directory and unpack the downloaded archive.
$ cd ~/Downloads        # go to the download directory
$ tar xzf flink-*.tgz   # unpack the downloaded archive
$ cd flink-1.2
2. Start a Local Flink Cluster
$ ./bin/start-local.sh
Check the JobManager's web frontend at http://localhost:8081 and make sure everything is up and running. The web frontend should report a single available TaskManager instance.
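If you prefer the command line, the JobManager also serves a monitoring REST API on the same port; assuming the standard /overview endpoint of that API, the following request returns a small JSON summary including the number of registered TaskManagers and available task slots:

$ curl http://localhost:8081/overview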
You can also verify that the system is running by checking the log files in the logs directory:
$ tail log/flink-*-jobmanager-*.log
INFO ... - Starting JobManager
INFO ... - Starting JobManager web frontend
INFO ... - Web frontend listening at 127.0.0.1:8081
INFO ... - Registered TaskManager at 127.0.0.1 (akka://flink/user/taskmanager)
3. Example Source Code
The complete source code of this SocketWindowWordCount example is available on GitHub in both Scala and Java.
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

public class SocketWindowWordCount {

    public static void main(String[] args) throws Exception {

        // the port to connect to
        final int port;
        try {
            final ParameterTool params = ParameterTool.fromArgs(args);
            port = params.getInt("port");
        } catch (Exception e) {
            System.err.println("No port specified. Please run 'SocketWindowWordCount --port <port>'");
            return;
        }

        // get the execution environment
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // get input data by connecting to the socket
        DataStream<String> text = env.socketTextStream("localhost", port, "\n");

        // parse the data, group it, window it, and aggregate the counts
        DataStream<WordWithCount> windowCounts = text
            .flatMap(new FlatMapFunction<String, WordWithCount>() {
                @Override
                public void flatMap(String value, Collector<WordWithCount> out) {
                    for (String word : value.split("\\s")) {
                        out.collect(new WordWithCount(word, 1L));
                    }
                }
            })
            .keyBy("word")
            .timeWindow(Time.seconds(5), Time.seconds(1))
            .reduce(new ReduceFunction<WordWithCount>() {
                @Override
                public WordWithCount reduce(WordWithCount a, WordWithCount b) {
                    return new WordWithCount(a.word, a.count + b.count);
                }
            });

        // print the results with a single thread, rather than in parallel
        windowCounts.print().setParallelism(1);

        env.execute("Socket Window WordCount");
    }

    // Data type for words with count
    public static class WordWithCount {

        public String word;
        public long count;

        public WordWithCount() {}

        public WordWithCount(String word, long count) {
            this.word = word;
            this.count = count;
        }

        @Override
        public String toString() {
            return word + " : " + count;
        }
    }
}
4. Run the Example
Now we are going to run this Flink application. It will read text from a socket and, once every 5 seconds, print the number of occurrences of each distinct word seen during the previous 5 seconds, i.e. a tumbling window of processing time, as long as words keep coming in.
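As an aside, the two-argument timeWindow call in the source listing above defines a sliding processing-time window (5-second size, 1-second slide), while the single-argument form defines the tumbling window described here (which is also what the job output in this step reports as TumblingProcessingTimeWindows). A minimal sketch of the two variants on the keyed stream from the example (a fragment for illustration, not a complete program) looks like this:

// sliding window: 5-second windows evaluated every second (overlapping results)
.keyBy("word")
.timeWindow(Time.seconds(5), Time.seconds(1))

// tumbling window: one non-overlapping result per 5-second interval
.keyBy("word")
.timeWindow(Time.seconds(5))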
First of all, we use netcat to start a local server:
$ nc -l 9000
Then submit the Flink program:
$ ./bin/flink run examples/streaming/SocketWindowWordCount.jar --port 9000
Cluster configuration: Standalone cluster with JobManager at /127.0.0.1:6123
Using address 127.0.0.1:6123 to connect to JobManager.
JobManager web interface address http://127.0.0.1:8081
Starting execution of program
Submitting job with JobID: 574a10c8debda3dccd0c78a3bde55e1b. Waiting for job completion.
Connected to JobManager at Actor[akka.tcp://flink@127.0.0.1:6123/user/jobmanager#297388688]
11/04/2016 14:04:50 Job execution switched to status RUNNING.
11/04/2016 14:04:50 Source: Socket Stream -> Flat Map(1/1) switched to SCHEDULED
11/04/2016 14:04:50 Source: Socket Stream -> Flat Map(1/1) switched to DEPLOYING
11/04/2016 14:04:50 Fast TumblingProcessingTimeWindows(5000) of WindowedStream.main(SocketWindowWordCount.java:79) -> Sink: Unnamed(1/1) switched to SCHEDULED
11/04/2016 14:04:51 Fast TumblingProcessingTimeWindows(5000) of WindowedStream.main(SocketWindowWordCount.java:79) -> Sink: Unnamed(1/1) switched to DEPLOYING
11/04/2016 14:04:51 Fast TumblingProcessingTimeWindows(5000) of WindowedStream.main(SocketWindowWordCount.java:79) -> Sink: Unnamed(1/1) switched to RUNNING
11/04/2016 14:04:51 Source: Socket Stream -> Flat Map(1/1) switched to RUNNING
The program connects to the socket and waits for input. You can check the web interface to verify that the job is running as expected.
Words are counted in time windows of 5 seconds (processing time) and printed to stdout. Monitor the JobManager's output file and write some text in nc:
$ nc -l 9000
lorem ipsum
ipsum ipsum ipsum
bye
As long as words keep coming in, the .out file will print the counts of the input words at the end of each time window, e.g.:
$ tail -f log/flink-*-jobmanager-*.out
lorem : 1
bye : 1
ipsum : 4
To stop Flink when you are done, run:
$ ./bin/stop-local.sh