
Deploying Hadoop with Cloudera

Introduction

Hadoop is an open-source distributed parallel programming framework that implements the MapReduce computing model. MapReduce originated at Google; it is a programming model that simplifies parallel computation and is well suited to processing huge data sets on large clusters, its most successful application to date being distributed search engines. With the appearance of Apache Hadoop, an open-source Java implementation of this model, at the end of 2007, programmers can easily write distributed parallel programs and run them on clusters of machines to process massive amounts of data. Over the past two years, and especially this year, applications of the MapReduce model have multiplied both in China and abroad: telecom operators such as NTT KDDI and China Mobile use it to analyse subscriber data and optimise network configuration; US power utilities use it to analyse the state of the grid; financial companies including VISA and JP Morgan use it to analyse stock data; retailers and e-commerce companies including Amazon and eBay have started to adopt it; and even some biotech companies use it for DNA sequencing and analysis. Hadoop, however, is notoriously difficult to install, deploy and manage, which has put many users off. Fortunately this has improved: Cloudera provides a very simple Hadoop distribution that makes installation, deployment and management straightforward, and roughly 75% of new Hadoop users are reported to use Cloudera.

Planning

Run modes

Hadoop has three run modes: standalone (non-distributed), pseudo-distributed, and fully distributed. The first two do not exercise Hadoop's distributed computing capabilities and have little practical value (although they are handy for testing and debugging programs), so the deployment here uses the fully distributed mode found in real environments.

Host planning

Three hosts are used to build the initial Hadoop environment. Since adding and removing hosts, as well as hosts on a different subnet, will be tested later, the hosts are planned as follows:

Hadoop-01 10.137.253.201

Hadoop-02 10.137.253.202

Hadoop-03 10.137.253.203  (test host to be added later)

Hadoop-04 10.137.253.204

Firehare-303 10.10.3.30  (cross-subnet test host to be added later)

Hadoop environment planning

Hadoop has two main components: the distributed filesystem HDFS and the MapReduce computing model. From HDFS's point of view, nodes are divided into a NameNode and DataNodes; there is only one NameNode, while there can be many DataNodes. From MapReduce's point of view, nodes are divided into a JobTracker and TaskTrackers; there is only one JobTracker, while there can be many TaskTrackers. A real Hadoop environment therefore usually has two master nodes, one acting as the NameNode (roughly, the I/O master) and one as the JobTracker (roughly, the management master), while the remaining slave nodes serve as both DataNodes and TaskTrackers. The NameNode and JobTracker can of course also be installed on a single master node. Because the number of test machines is limited, Hadoop-01 acts here as both NameNode and JobTracker, and the other hosts act as DataNodes and TaskTrackers (in a Hadoop environment with many hosts it is still advisable to put the NameNode and the JobTracker on separate machines for better performance). The concrete plan is as follows:

HDFS:

Hadoop-01 NameNode

Hadoop-02 DataNode

Hadoop-03 DataNode

Hadoop-04 DataNode

Firehare-303 DataNode


MapReduce:

Hadoop-01 JobTracker

Hadoop-02 TaskTracker

Hadoop-03 TaskTracker

Hadoop-04 TaskTracker

Firehare-303 TaskTracker

Installation

With the plan in place, installation can begin. As mentioned in the introduction, installing Hadoop from the Cloudera distribution is very convenient. First install a clean operating system on every host (Ubuntu 8.04 is used here, with a user named hadoop; other releases should work much the same), then install Hadoop itself (the steps below install Hadoop-0.20; Hadoop-0.18 can be installed instead with essentially the same steps, but note that Hadoop-0.20 and Hadoop-0.18 must not be enabled at the same time). Since the procedure is identical on every machine, it is described for one host only and consists of the following steps:

Setting up the Cloudera repository

  • Create the Cloudera source list (the Hadoop-0.20 release is used here):
sudo vi /etc/apt/sources.list.d/cloudera.list

# Stable (Hadoop-0.18)
#deb http://archive.cloudera.com/debian hardy-stable contrib
#deb-src http://archive.cloudera.com/debian hardy-stable contrib

# Testing (Hadoop-0.20)
deb http://archive.cloudera.com/debian hardy-testing contrib
deb-src http://archive.cloudera.com/debian hardy-testing contrib
  • Import the repository key:
sudo apt-get install curl

curl -s http://archive.cloudera.com/debian/archive.key | sudo apt-key add - 
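If the import succeeded, the Cloudera archive key should now appear in APT's key ring. A minimal check (assuming the key's description mentions Cloudera, which is not guaranteed by this guide):

apt-key list | grep -i cloudera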

Installing Hadoop

  • Update the package index:
sudo apt-get update
sudo apt-get dist-upgrade
  • Install Hadoop:
sudo apt-get install hadoop-0.20 hadoop-0.20-conf-pseudo  
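To confirm that the packages are installed and that the hadoop-0.20 wrapper script is available, a quick check along these lines can be run (a sketch; the exact version string printed depends on the Cloudera release):

dpkg -l | grep hadoop-0.20
hadoop-0.20 version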

Deployment

Once Hadoop has been installed on all of these hosts, they need to be set up for the fully distributed run mode. The first step is to interconnect them.

Host interconnection

Interconnection in a Hadoop environment means that the network between the hosts is open and that host names resolve to IP addresses correctly, so that any host can ping every other host by name. Note that this refers to host names: on Hadoop-01, for example, the command ping hadoop-02 must reach the Hadoop-02 host (and likewise all of these hosts must be able to ping one another by name). This can be achieved through the /etc/hosts file on each host, configured as follows:

sudo vi /etc/hosts

127.0.0.1 localhost
10.x.253.201 hadoop-01 hadoop-01
10.x.253.202 hadoop-02 hadoop-02
10.x.253.203 hadoop-03 hadoop-03
10.x.253.204 hadoop-04 hadoop-04
10.x.3.30 firehare-303 firehare-303

Apply the same change to the hosts file on every host; the hosts can then reach one another by host name, as required.
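A quick way to verify the name resolution is a small loop that pings every host by name (a sketch using the host list from this guide; hosts that are not yet reachable will simply report FAILED):

for h in hadoop-01 hadoop-02 hadoop-03 hadoop-04 firehare-303; do
    ping -c 1 $h > /dev/null 2>&1 && echo "$h OK" || echo "$h FAILED"
done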


Note: strictly speaking, not every host needs to know the names of all the other hosts in the Hadoop environment. Only the master hosts (NameNode, JobTracker) need the IP addresses and host names of every machine in their hosts file; a host used only as a DataNode just needs entries for itself and the masters. (Whether the JobTracker host needs entries for all machines, like the NameNode host does, is untested here; it presumably does, and anyone interested is welcome to try.) Because testing may change which host acts as the master, and to keep things simple, every entry is added on every host here. :)

Compute account setup

Hadoop requires the same deployment directory layout on all machines, along with an account with the same user name on each of them. Since the Cloudera Hadoop packages are used here, no extra setup is needed for this; it is mentioned only for completeness.

SSH setup

In a distributed Hadoop environment, the master nodes (NameNode, JobTracker) start and stop the daemons on the slave nodes (DataNode, TaskTracker) over SSH. Every machine in the environment must therefore be reachable over SSH, and the masters must be able to log in to the slaves without entering a password, so that they can control the other nodes unattended. This is achieved by configuring SSH on each machine to use passwordless public-key authentication. The open-source SSH implementation on Ubuntu is OpenSSH; the server side is not installed by default, so it has to be installed first.

Installing OpenSSH

Installing OpenSSH is simple: the following command installs both openssh-client and openssh-server:

sudo apt-get install ssh
Configuring passwordless public-key authentication for OpenSSH

First, run the following command on the Hadoop-01 machine:

hadoop@hadoop-01:~$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase): (just press Enter)
Enter same passphrase again: (just press Enter)
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
9d:42:04:26:00:51:c7:4e:2f:7e:38:dd:93:1c:a2:d6 hadoop@hadoop-01

The command above generates a key pair for the current user hadoop on host hadoop-01. The private key is saved in /home/hadoop/.ssh/id_rsa, and the generated public key is saved in the same directory (here /home/hadoop/.ssh), producing the two files id_rsa and id_rsa.pub. Next, append the contents of id_rsa.pub to the end of /home/hadoop/.ssh/authorized_keys on every host (including hadoop-01 itself); if that file does not exist, create it by hand.

Note: the contents of id_rsa.pub form one long line; when copying it, make sure no characters are dropped and no extra line breaks are introduced.
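To reduce the chance of copy-and-paste mistakes, the public key can also be appended remotely over SSH, for example with a loop like the one below (a sketch run as the hadoop user on hadoop-01; each connection still prompts for the hadoop password at this stage):

for h in hadoop-01 hadoop-02 hadoop-04 firehare-303; do
    cat ~/.ssh/id_rsa.pub | ssh $h 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
done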

Testing the passwordless SSH connections

From hadoop-01, open SSH connections to hadoop-01, hadoop-04 and firehare-303 in turn and make sure each one succeeds without asking for a password. Note that the first connection to each host prints a prompt similar to the following:

The authenticity of host [hadoop-01] can't be established. The key fingerprint is: 
c8:c2:b2:d0:29:29:1a:e3:ec:d9:4a:47:98:29:b4:48 Are you sure you want to continue connecting (yes/no)?

Type yes; OpenSSH will then add the remote host's information to /home/hadoop/.ssh/known_hosts automatically, and the prompt will not appear on subsequent connections.
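All of the connections can be exercised in one go with a small loop (a sketch; each host name should be printed back without any password prompt):

for h in hadoop-01 hadoop-02 hadoop-04 firehare-303; do
    ssh $h hostname
done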

Configuring Hadoop on the master node

Setting JAVA_HOME

Hadoop's JAVA_HOME is set in the file /etc/hadoop/conf/hadoop-env.sh, as follows:

sudo vi /etc/hadoop/conf/hadoop-env.sh

export JAVA_HOME="/usr/lib/jvm/java-6-sun"
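This assumes a Sun Java 6 installation at that path; a quick way to confirm it is to run the JVM that JAVA_HOME points to:

/usr/lib/jvm/java-6-sun/bin/java -version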
Hadoop core configuration

Hadoop's core configuration file is /etc/hadoop/conf/core-site.xml; configure it as follows:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
<property>
<name>fs.default.name</name>
<!--
<value>hdfs://localhost:8020</value>
-->
<value>hdfs://hadoop-01:8020</value>
</property>

<property>
<name>hadoop.tmp.dir</name>
<value>/var/lib/hadoop-0.20/cache/${user.name}</value>
</property>
</configuration>
Configuring Hadoop's distributed storage

Hadoop's distributed storage settings live mainly in /etc/hadoop/conf/hdfs-site.xml; configure it as follows:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
<property>
<name>dfs.replication</name>
<!--
<value>1</value>
-->
<value>3</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<!-- specify this so that running 'hadoop namenode -format' formats the right dir -->
<name>dfs.name.dir</name>
<value>/var/lib/hadoop-0.20/cache/hadoop/dfs/name</value>
</property>
</configuration>
Configuring Hadoop's distributed computation

Hadoop's distributed computation uses the Map/Reduce model, whose settings live mainly in /etc/hadoop/conf/mapred-site.xml; configure it as follows:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
<property>
<name>mapred.job.tracker</name>
<!--
<value>localhost:8021</value>
-->
<value>hadoop-01:8021</value>
</property>
</configuration>
Defining the master and slave nodes

First define the master node by editing /etc/hadoop/conf/masters, as shown below:

hadoop-01

Then define the slave nodes by editing /etc/hadoop/conf/slaves, as shown below:

hadoop-02
hadoop-03
hadoop-04
firehare-303

Configuring Hadoop on the slave nodes

Configuring Hadoop on the slave nodes is simple: just copy the master node's configuration to each slave.

scp -r /etc/hadoop/conf hadoop-02:/etc/hadoop
scp -r /etc/hadoop/conf hadoop-03:/etc/hadoop
scp -r /etc/hadoop/conf hadoop-04:/etc/hadoop
scp -r /etc/hadoop/conf firehare-303:/etc/hadoop

Starting Hadoop

Formatting the distributed filesystem

One last preparation is needed before starting Hadoop: formatting the distributed filesystem. This only has to be done on the master node:

/usr/lib/hadoop-0.20/bin/hadoop namenode -format

Starting the Hadoop services

Hadoop can be started with the following command:

/usr/lib/hadoop-0.20/bin/start-all.sh

Note that the command is run without sudo; running it with sudo fails, because passwordless SSH was not set up for the root user. The output is shown below; hadoop-03 was deliberately left disconnected, hence the "No route to host" messages.

hadoop@hadoop-01:~$ /usr/lib/hadoop-0.20/bin/start-all.sh
namenode running as process 4836. Stop it first.
hadoop-02: starting datanode, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-datanode-hadoop-02.out
hadoop-04: starting datanode, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-datanode-hadoop-04.out
firehare-303: starting datanode, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-datanode-usvr-303b.out
hadoop-03: ssh: connect to host hadoop-03 port 22: No route to host
hadoop-01: secondarynamenode running as process 4891. Stop it first.
jobtracker running as process 4787. Stop it first.
hadoop-02: starting tasktracker, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-tasktracker-hadoop-02.out
hadoop-04: starting tasktracker, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-tasktracker-hadoop-04.out
firehare-303: starting tasktracker, logging to /usr/lib/hadoop-0.20/bin/../logs/hadoop-hadoop-tasktracker-usvr-303b.out
hadoop-03: ssh: connect to host hadoop-03 port 22: No route to host

Hadoop is now up and running.
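As an additional check, the daemons running on the master can be listed with the jps tool from the Java installation configured earlier (jps ships with the JDK, so this assumes the full Sun JDK rather than just the JRE is installed); on hadoop-01 the list should include NameNode, SecondaryNameNode and JobTracker, and on the slaves DataNode and TaskTracker:

hadoop@hadoop-01:~$ /usr/lib/jvm/java-6-sun/bin/jps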

Testing Hadoop

With Hadoop set up, the next step is to test whether it actually works, as shown in the following session:

hadoop@hadoop-01:~$ hadoop-0.20 fs -mkdir input
hadoop@hadoop-01:~$ hadoop-0.20 fs -put /etc/hadoop-0.20/conf/*.xml input
hadoop@hadoop-01:~$ hadoop-0.20 fs -ls input
Found 6 items
-rw-r--r-- 3 hadoop supergroup 3936 2010-03-11 08:55 /user/hadoop/input/capacity-scheduler.xml
-rw-r--r-- 3 hadoop supergroup 400 2010-03-11 08:55 /user/hadoop/input/core-site.xml
-rw-r--r-- 3 hadoop supergroup 3032 2010-03-11 08:55 /user/hadoop/input/fair-scheduler.xml
-rw-r--r-- 3 hadoop supergroup 4190 2010-03-11 08:55 /user/hadoop/input/hadoop-policy.xml
-rw-r--r-- 3 hadoop supergroup 536 2010-03-11 08:55 /user/hadoop/input/hdfs-site.xml
-rw-r--r-- 3 hadoop supergroup 266 2010-03-11 08:55 /user/hadoop/input/mapred-site.xml
hadoop@hadoop-01:~$ hadoop-0.20 jar /usr/lib/hadoop-0.20/hadoop-*-examples.jar grep input output 'dfs[a-z.]+'
10/03/11 08:55:43 INFO mapred.FileInputFormat: Total input paths to process : 6
10/03/11 08:55:44 INFO mapred.JobClient: Running job: job_201003110836_0001
10/03/11 08:55:45 INFO mapred.JobClient: map 0% reduce 0%
10/03/11 08:55:57 INFO mapred.JobClient: map 33% reduce 0%
10/03/11 08:56:06 INFO mapred.JobClient: map 33% reduce 11%
10/03/11 08:56:07 INFO mapred.JobClient: map 66% reduce 11%
10/03/11 08:56:12 INFO mapred.JobClient: map 100% reduce 11%
10/03/11 08:56:21 INFO mapred.JobClient: map 100% reduce 22%
10/03/11 09:04:06 INFO mapred.JobClient: Task Id : attempt_201003110836_0001_m_000002_0, Status : FAILED
Too many fetch-failures
10/03/11 09:04:06 WARN mapred.JobClient: Error reading task outputusvr-303b.cmet.wzu.edu.cn
10/03/11 09:04:06 WARN mapred.JobClient: Error reading task outputusvr-303b.cmet.wzu.edu.cn
10/03/11 09:04:22 INFO mapred.JobClient: map 100% reduce 27%
10/03/11 09:06:50 INFO mapred.JobClient: Task Id : attempt_201003110836_0001_m_000003_0, Status : FAILED
Too many fetch-failures
10/03/11 09:06:50 WARN mapred.JobClient: Error reading task outputusvr-303b.cmet.wzu.edu.cn
10/03/11 09:06:50 WARN mapred.JobClient: Error reading task outputusvr-303b.cmet.wzu.edu.cn
10/03/11 09:07:03 INFO mapred.JobClient: map 100% reduce 100%
10/03/11 09:07:05 INFO mapred.JobClient: Job complete: job_201003110836_0001
10/03/11 09:07:05 INFO mapred.JobClient: Counters: 18
10/03/11 09:07:05 INFO mapred.JobClient: Job Counters
10/03/11 09:07:05 INFO mapred.JobClient: Launched reduce tasks=1
10/03/11 09:07:05 INFO mapred.JobClient: Launched map tasks=8
10/03/11 09:07:05 INFO mapred.JobClient: Data-local map tasks=8
10/03/11 09:07:05 INFO mapred.JobClient: FileSystemCounters
10/03/11 09:07:05 INFO mapred.JobClient: FILE_BYTES_READ=100
10/03/11 09:07:05 INFO mapred.JobClient: HDFS_BYTES_READ=12360
10/03/11 09:07:05 INFO mapred.JobClient: FILE_BYTES_WRITTEN=422
10/03/11 09:07:05 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=204
10/03/11 09:07:05 INFO mapred.JobClient: Map-Reduce Framework
10/03/11 09:07:05 INFO mapred.JobClient: Reduce input groups=4
10/03/11 09:07:05 INFO mapred.JobClient: Combine output records=4
10/03/11 09:07:05 INFO mapred.JobClient: Map input records=315
10/03/11 09:07:05 INFO mapred.JobClient: Reduce shuffle bytes=49
10/03/11 09:07:05 INFO mapred.JobClient: Reduce output records=4
10/03/11 09:07:05 INFO mapred.JobClient: Spilled Records=8
10/03/11 09:07:05 INFO mapred.JobClient: Map output bytes=86
10/03/11 09:07:05 INFO mapred.JobClient: Map input bytes=12360
10/03/11 09:07:05 INFO mapred.JobClient: Combine input records=4
10/03/11 09:07:05 INFO mapred.JobClient: Map output records=4
10/03/11 09:07:05 INFO mapred.JobClient: Reduce input records=4
10/03/11 09:07:05 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
10/03/11 09:07:05 INFO mapred.FileInputFormat: Total input paths to process : 1
10/03/11 09:07:05 INFO mapred.JobClient: Running job: job_201003110836_0002
10/03/11 09:07:06 INFO mapred.JobClient: map 0% reduce 0%
10/03/11 09:07:13 INFO mapred.JobClient: map 100% reduce 0%
10/03/11 09:07:19 INFO mapred.JobClient: map 100% reduce 100%
10/03/11 09:07:21 INFO mapred.JobClient: Job complete: job_201003110836_0002
10/03/11 09:07:21 INFO mapred.JobClient: Counters: 18
10/03/11 09:07:21 INFO mapred.JobClient: Job Counters
10/03/11 09:07:21 INFO mapred.JobClient: Launched reduce tasks=1
10/03/11 09:07:21 INFO mapred.JobClient: Launched map tasks=1
10/03/11 09:07:21 INFO mapred.JobClient: Data-local map tasks=1
10/03/11 09:07:21 INFO mapred.JobClient: FileSystemCounters
10/03/11 09:07:21 INFO mapred.JobClient: FILE_BYTES_READ=100
10/03/11 09:07:21 INFO mapred.JobClient: HDFS_BYTES_READ=204
10/03/11 09:07:21 INFO mapred.JobClient: FILE_BYTES_WRITTEN=232
10/03/11 09:07:21 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=62
10/03/11 09:07:21 INFO mapred.JobClient: Map-Reduce Framework
10/03/11 09:07:21 INFO mapred.JobClient: Reduce input groups=1
10/03/11 09:07:21 INFO mapred.JobClient: Combine output records=0
10/03/11 09:07:21 INFO mapred.JobClient: Map input records=4
10/03/11 09:07:21 INFO mapred.JobClient: Reduce shuffle bytes=0
10/03/11 09:07:21 INFO mapred.JobClient: Reduce output records=4
10/03/11 09:07:21 INFO mapred.JobClient: Spilled Records=8
10/03/11 09:07:21 INFO mapred.JobClient: Map output bytes=86
10/03/11 09:07:21 INFO mapred.JobClient: Map input bytes=118
10/03/11 09:07:21 INFO mapred.JobClient: Combine input records=0
10/03/11 09:07:21 INFO mapred.JobClient: Map output records=4
10/03/11 09:07:21 INFO mapred.JobClient: Reduce input records=4

As the output shows, the test completed successfully despite a few retried task attempts, which means the Hadoop deployment works and distributed Map/Reduce computation can now run on it.
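To inspect the result of the grep job, the output directory can be listed and printed with the same fs commands used above (a sketch; the matches should be the dfs.* property names found in the uploaded configuration files):

hadoop@hadoop-01:~$ hadoop-0.20 fs -ls output
hadoop@hadoop-01:~$ hadoop-0.20 fs -cat 'output/part-*'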