1. Overview
This article walks through installing and testing TensorFlowOnSpark for big data workloads.
2. Environment
Operating system | Address and software versions | Node role
CentOS 7.3 64-bit | 192.168.2.31 (master); Java: JDK 1.8; Scala: 2.12.3; Hadoop: 2.7.3; Spark: 2.1.1; TensorFlowOnSpark: 0.8.0; Python: 2.7 | master
CentOS 7.3 64-bit | 192.168.2.32 (Spark worker); Java: JDK 1.8; Hadoop: 2.7.3; Spark: 2.1.1 | slave001
CentOS 7.3 64-bit | 192.168.2.33 (Spark worker); Java: JDK 1.8; Hadoop: 2.7.3; Spark: 2.1.1 | slave002
3. Installation
3.1 Remove the bundled OpenJDK
# rpm -e --nodeps java-1.7.0-openjdk-1.7.0.99-2.6.5.1.el6.x86_64
# rpm -e --nodeps java-1.6.0-openjdk-1.6.0.38-1.13.10.4.el6.x86_64
# rpm -e --nodeps tzdata-java-2016c-1.el6.noarch
3.2 Install the JDK
rpm -ivh jdk-8u144-linux-x64.rpm
3.3 Add the Java path to /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_144
3.4 Verify Java
[root@master opt]# java -version
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
3.5 Set up passwordless SSH
cd /root/.ssh/
ssh-keygen -t rsa
cat id_rsa.pub >> authorized_keys
scp id_rsa.pub authorized_keys root@192.168.2.32:/root/.ssh/
scp id_rsa.pub authorized_keys root@192.168.2.33:/root/.ssh/
3.6 Install Python 2.7
yum install -y gcc
wget https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tgz
tar vxf Python-2.7.13.tgz
cd Python-2.7.13
./configure --prefix=/usr/local
make && make install
[root@master opt]# python
Python 2.7.13 (default, Aug 24 2017, 16:10:35)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-18)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
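Once the build finishes, you can also verify programmatically that the interpreter matches the cp27 wheels installed later in this article. This is a small hypothetical helper (the function name is mine, not part of any tool used here):

```python
import sys

def interpreter_matches_wheel(info=sys.version_info, tag="cp27"):
    """Return True if the running CPython's major.minor matches a
    wheel tag like 'cp27' (CPython 2.7) or 'cp36' (CPython 3.6)."""
    major, minor = int(tag[2]), int(tag[3:])
    # version_info supports indexing on both Python 2 and 3
    return (info[0], info[1]) == (major, minor)
```

Running `interpreter_matches_wheel()` on the freshly built interpreter should return True before attempting the TensorFlow wheel install below.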
3.7 Install pip and setuptools
tar zxvf pip-1.5.4.tar.gz
tar zxvf setuptools-2.0.tar.gz
cd setuptools-2.0
python setup.py install
cd ../pip-1.5.4
python setup.py install
3.8 Install and configure Hadoop
3.8.1 Install Hadoop on all three machines
tar zxvf hadoop-2.7.3.tar.gz -C /usr/local/
cd /usr/local/hadoop-2.7.3/bin
[root@master bin]# ./hadoop version
Hadoop 2.7.3
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff
Compiled by root on 2016-08-18T01:41Z
Compiled with protoc 2.5.0
From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
This command was run using /usr/local/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar
3.8.2 Configure Hadoop
Configure the master:
vi /usr/local/hadoop-2.7.3/etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9001</value>
  </property>
</configuration>
Configure the slaves (fs.defaultFS must point at the NameNode on every node):
[root@slave001 hadoop-2.7.3]# vi ./etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9001</value>
  </property>
</configuration>
[root@slave002 hadoop-2.7.3]# vi ./etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9001</value>
  </property>
</configuration>
3.8.3 Configure HDFS
vi /usr/local/hadoop-2.7.3/etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop/tmp/dfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address</name>
    <value>master:9001</value>
  </property>
</configuration>
3.9 Install Scala
tar -zxvf scala-2.12.3.tgz -C /usr/local/
# add Scala to the environment
vi /etc/profile
export SCALA_HOME=/usr/local/scala-2.12.3/
export PATH=$PATH:/usr/local/scala-2.12.3/bin
source /etc/profile
3.10 Install Spark on all three machines
tar -zxvf spark-2.1.1-bin-hadoop2.7.tgz -C /usr/local/
vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_144/
export SCALA_HOME=/usr/local/scala-2.12.3/
export PATH=$PATH:/usr/local/scala-2.12.3/bin
export SPARK_HOME=/usr/local/spark-2.1.1-bin-hadoop2.7/
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
source /etc/profile
Edit the Spark configuration:
cd /usr/local/spark-2.1.1-bin-hadoop2.7/
vi ./conf/spark-env.sh.template
export JAVA_HOME=/usr/java/jdk1.8.0_144/
export SCALA_HOME=/usr/local/scala-2.12.3/
#export SPARK_HOME=/usr/local/spark-2.1.1-bin-hadoop2.7/
export SPARK_MASTER_IP=192.168.2.31
export SPARK_WORKER_MEMORY=1g
export HADOOP_CONF_DIR=/usr/local/hadoop-2.7.3/etc/hadoop
export HADOOP_HDFS_HOME=/usr/local/hadoop-2.7.3/
export SPARK_DRIVER_MEMORY=1g
Save and exit, then rename the template:
mv spark-env.sh.template spark-env.sh
# edit the slaves file
[root@master conf]# vi slaves.template
192.168.2.32
192.168.2.33
[root@master conf]# mv slaves.template slaves
3.11 Edit /etc/hosts on all three machines
vi /etc/hosts
192.168.2.31 master
192.168.2.32 slave001
192.168.2.33 slave002
4. Start the services
[root@master local]# cd hadoop-2.7.3/sbin/
./start-all.sh
localhost: Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
localhost: Error: JAVA_HOME is not set and could not be found.
Edit the configuration file:
vi /usr/local/hadoop-2.7.3/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_144/
Restart the services:
sbin/start-all.sh
# start Spark
cd /usr/local/spark-2.1.1-bin-hadoop2.7/sbin/
./start-all.sh
5. Install TensorFlow
First, install CUDA as a prerequisite.
vim /etc/yum.repos.d/linuxtech.testing.repo
Add the following content:
[linuxtech-testing]
name=LinuxTECH Testing
baseurl=http://pkgrepo.linuxtech.net/el6/testing/
enabled=0
gpgcheck=1
gpgkey=http://pkgrepo.linuxtech.net/el6/release/RPM-GPG-KEY-LinuxTECH.NET
sudo rpm -i cuda-repo-rhel6-8.0.61-1.x86_64.rpm
sudo yum clean all
sudo yum install cuda
rpm -ivh --nodeps dkms-2.1.1.2-1.el6.rf.noarch.rpm
yum install epel-release
yum install -y zlib*
# symlink CUDA and refresh the linker cache
ln -s /usr/local/cuda-8.0 /usr/local/cuda
ldconfig /usr/local/cuda/lib64
vi /etc/profile
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
export CUDA_HOME=/usr/local/cuda
Upgrade pip:
pip install --upgrade pip
Install TensorFlow:
pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.8.0-cp27-none-linux_x86_64.whl
After installation, test the import:
# python
>>> import tensorflow
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/site-packages/tensorflow/__init__.py", line 23, in <module>
    from tensorflow.python import *
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/__init__.py", line 45, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 28, in <module>
    _pywrap_tensorflow = swig_import_helper()
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow', fp, pathname, description)
ImportError: libcudart.so.7.5: cannot open shared object file: No such file or directory
# This happens because required libraries are missing
yum install openssl -y
yum install openssl-devel -y
yum install gcc gcc-c++ gcc*
# upgrade pip and reinstall
pip install --upgrade pip
pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.8.0-cp27-none-linux_x86_64.whl
>>> import tensorflow
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/site-packages/tensorflow/__init__.py", line 23, in <module>
    from tensorflow.python import *
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/__init__.py", line 45, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 28, in <module>
    _pywrap_tensorflow = swig_import_helper()
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow', fp, pathname, description)
ImportError: /lib64/libc.so.6: version `GLIBC_2.15' not found (required by /usr/local/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so)
# This happens because TensorFlow was built against newer C runtime libraries than the system ships. Check the versions exported by the system libstdc++:
# strings /usr/lib64/libstdc++.so.6 | grep GLIBCXX
GLIBCXX_3.4
GLIBCXX_3.4.1
GLIBCXX_3.4.2
GLIBCXX_3.4.3
GLIBCXX_3.4.4
GLIBCXX_3.4.5
GLIBCXX_3.4.6
GLIBCXX_3.4.7
GLIBCXX_3.4.8
GLIBCXX_3.4.9
GLIBCXX_3.4.10
GLIBCXX_3.4.11
GLIBCXX_3.4.12
GLIBCXX_3.4.13
GLIBCXX_FORCE_NEW
GLIBCXX_DEBUG_MESSAGE_LENGTH
Obtain the newer runtime library, extract libstdc++.so.6.0.20 from it, and point the libstdc++.so.6 symlink at it, replacing the original:
[root@master 4.4.7]# ln -s /opt/libstdc++.so.6/libstdc++.so.6.0.20 /usr/lib64/libstdc++.so.6
ln: creating symbolic link `/usr/lib64/libstdc++.so.6': File exists
[root@master 4.4.7]# mv /usr/lib64/libstdc++.so.6 /root/
[root@master 4.4.7]# ln -s /opt/libstdc++.so.6/libstdc++.so.6.0.20 /usr/lib64/libstdc++.so.6
[root@master ~]# strings /usr/lib64/libstdc++.so.6 | grep GLIBCXX
GLIBCXX_3.4
GLIBCXX_3.4.1
GLIBCXX_3.4.2
GLIBCXX_3.4.3
GLIBCXX_3.4.4
GLIBCXX_3.4.5
GLIBCXX_3.4.6
GLIBCXX_3.4.7
GLIBCXX_3.4.8
GLIBCXX_3.4.9
GLIBCXX_3.4.10
GLIBCXX_3.4.11
GLIBCXX_3.4.12
GLIBCXX_3.4.13
GLIBCXX_3.4.14
GLIBCXX_3.4.15
GLIBCXX_3.4.16
GLIBCXX_3.4.17
GLIBCXX_3.4.18
GLIBCXX_3.4.19
GLIBCXX_3.4.20
GLIBCXX_DEBUG_MESSAGE_LENGTH
Be especially careful at this step; it has many pitfalls, and the original library really must be replaced.
pip install tensorflowonspark
TensorFlowOnSpark is now ready to use.
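For orientation, a minimal job skeleton follows the TensorFlowOnSpark "cluster run" pattern. This is a sketch only: the map function body and argument names are placeholders of mine, not code from this article, and the defaults assume the three-node cluster above (1 parameter server + 2 workers):

```python
def parse_cluster_args(argv):
    """Tiny helper: pull --num_executors / --num_ps out of argv,
    with defaults matching the three-node cluster above."""
    opts = {"num_executors": 3, "num_ps": 1}
    for i, arg in enumerate(argv):
        key = arg.lstrip("-")
        if key in opts and i + 1 < len(argv):
            opts[key] = int(argv[i + 1])
    return opts

def map_fun(args, ctx):
    # Placeholder worker: each executor would build its TensorFlow graph
    # here, branching on ctx.job_name ("ps" or "worker") and ctx.task_index.
    pass

if __name__ == "__main__":
    import sys
    from pyspark.context import SparkContext
    from tensorflowonspark import TFCluster  # requires the pip install above

    opts = parse_cluster_args(sys.argv[1:])
    sc = SparkContext()
    cluster = TFCluster.run(sc, map_fun, opts, opts["num_executors"],
                            opts["num_ps"], False, TFCluster.InputMode.SPARK)
    cluster.shutdown()
```

Such a script would be launched with spark-submit against the standalone master started earlier; the exact feeding of RDD data to the workers depends on the chosen InputMode.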
If the import instead fails with:
ImportError: /lib64/libc.so.6: version `GLIBC_2.17' not found (required by /usr/local/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so)
build and install glibc 2.17:
tar zxvf glibc-2.17.tar.gz
mkdir build
cd build
../glibc-2.17/configure --prefix=/usr --disable-profile --enable-add-ons --with-headers=/usr/include --with-binutils=/usr/bin
make -j4
make install
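Both failures above come down to comparing library version strings, and the comparison must be numeric, not lexical ("2.9" sorts after "2.15" as a string). The following illustrative helper (the function names are mine, not from any tool used in this article) checks whether the running glibc meets a requirement:

```python
import ctypes

def version_tuple(v):
    # "2.15" -> (2, 15); tuple comparison is numeric per component,
    # avoiding the "2.9" > "2.15" string-comparison trap
    return tuple(int(part) for part in v.split("."))

def glibc_at_least(required, current):
    """True if version string `current` satisfies minimum `required`."""
    return version_tuple(current) >= version_tuple(required)

def current_glibc():
    """Ask glibc itself for its version; returns None on non-glibc systems."""
    try:
        libc = ctypes.CDLL("libc.so.6")
        libc.gnu_get_libc_version.restype = ctypes.c_char_p
        return libc.gnu_get_libc_version().decode()
    except (OSError, AttributeError):
        return None
```

On the patched system, `glibc_at_least("2.17", current_glibc())` should return True before retrying the TensorFlow import.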
6. Test and verify TensorFlow
import tensorflow as tf
import numpy as np

x_data = np.float32(np.random.rand(2, 100))
y_data = np.dot([0.100, 0.200], x_data) + 0.300

b = tf.Variable(tf.zeros([1]))
W = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0))
y = tf.matmul(W, x_data) + b

loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

init = tf.initialize_all_variables()

sess = tf.Session()
sess.run(init)

for step in xrange(0, 201):
    sess.run(train)
    if step % 20 == 0:
        print step, sess.run(W), sess.run(b)

# Best fit: W: [[0.100 0.200]], b: [0.300]
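The same linear fit can be cross-checked without TensorFlow at all. Below is a plain NumPy gradient-descent version of the identical model (my own re-derivation for illustration, not code from the original article), using the same learning rate and step count:

```python
import numpy as np

np.random.seed(0)
x_data = np.random.rand(2, 100).astype(np.float32)
y_data = np.dot([0.100, 0.200], x_data) + 0.300

W = np.random.uniform(-1.0, 1.0, size=(1, 2))  # same init range as the TF script
b = np.zeros(1)
lr = 0.5

for step in range(201):
    pred = W.dot(x_data) + b                         # forward pass, shape (1, 100)
    err = pred - y_data                              # residuals
    dW = 2.0 * err.dot(x_data.T) / x_data.shape[1]   # gradient of mean squared error w.r.t. W
    db = 2.0 * err.mean()                            # gradient w.r.t. b
    W -= lr * dW
    b -= lr * db

print(W, b)  # should approach [[0.1, 0.2]] and [0.3]
```

If the TensorFlow run and this NumPy run converge to the same W and b, the TensorFlow installation is computing correctly.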
Make sure /etc/profile contains:
export JAVA_HOME=/usr/java/jdk1.8.0_144/
export SCALA_HOME=/usr/local/scala-2.12.3/
export PATH=$PATH:/usr/local/scala-2.12.3/bin
export SPARK_HOME=/usr/local/spark-2.1.1-bin-hadoop2.7/
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
export CUDA_HOME=/usr/local/cuda
export PYTHONPATH=$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.10.4-src.zip:$PYTHONPATH
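Missing entries in /etc/profile are a common cause of submit-time failures, so it can help to check them programmatically before launching a job. A hypothetical checker (the variable list mirrors the profile above; the function is mine, not part of Spark or TensorFlowOnSpark):

```python
import os

REQUIRED_VARS = ["JAVA_HOME", "SCALA_HOME", "SPARK_HOME", "CUDA_HOME", "PYTHONPATH"]

def missing_vars(env=None, required=REQUIRED_VARS):
    """Return the names of required variables that are absent or empty in env."""
    env = os.environ if env is None else env
    return [name for name in required if not env.get(name)]
```

Calling `missing_vars()` in the shell's Python before spark-submit should print an empty list; any names it returns point at a profile line that was not sourced.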
This completes the setup.
Download: http://down.51cto.com/data/2338827
Original title: Big Data TensorFlowOnSpark Installation