Context: the Hive client and ZooKeeper are not on the same machine; Hive is used purely as a client and is not co-located with the Hadoop cluster.
During Hive and HBase integration, creating the Hive table that maps onto an HBase table kept failing because the ZooKeeper client always connected to localhost:2181. The CREATE TABLE statement:
---------------------------------------------------------------------------------------------------
create external table h_table_user3(
  key int, name string, age int, city string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,info:name,info:age,address:city')
TBLPROPERTIES ("hbase.table.name" = "user");
-----------------------------------------------------------------------------------------------------
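For reference, the 'hbase.columns.mapping' string pairs each Hive column, in declaration order, with an HBase cell. A minimal sketch of that pairing (illustrative only; the real parsing lives in Hive's HBaseSerDe):

```python
# Sketch of how 'hbase.columns.mapping' lines up with the Hive column
# list, in declaration order. ':key' marks the HBase row key; every
# other entry is a 'family:qualifier' cell.
hive_columns = ["key", "name", "age", "city"]
mapping = ":key,info:name,info:age,address:city"

pairs = dict(zip(hive_columns, mapping.split(",")))
print(pairs)
# {'key': ':key', 'name': 'info:name', 'age': 'info:age', 'city': 'address:city'}
```

So the HBase table "user" is expected to have the column families info and address.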
Every ZooKeeper-related property in hive-site.xml under Hive's conf directory had already been updated, but it made no difference:
----------------------------------------------------------------------------------------------------------
<property>
  <name>hive.cluster.delegation.token.store.zookeeper.connectString</name>
  <value>l-hdfsgl2.bi.prod.cn1:2181</value>
  <description>The ZooKeeper token store connect string.</description>
</property>
<property>
  <name>hive.zookeeper.quorum</name>
  <value>l-hdfsgl1.bi.prod.cn1,l-hdfsgl2.bi.prod.cn1,l-hdfscc1.bi.prod.cn1</value>
  <description>The list of zookeeper servers to talk to. This is only needed for read/write locks.</description>
</property>
<property>
  <name>hive.zookeeper.client.port</name>
  <value>2181</value>
  <description>The port of zookeeper servers to talk to. This is only needed for read/write locks.</description>
</property>
--------------------------------------------------------------------------------------------------------------------------
Even with all three of the properties above configured, the client still connected to localhost:2181, as the (trimmed) debug log shows:
...............................................................................................................................................................
b/stax-api-1.0.1.jar:/usr/local/hive-0.12.0-cdh5.1.2/lib/stringtemplate-3.2.1.jar:/usr/local/hive-0.12.0-cdh5.1.2/lib/tempus-fugit-1.1.jar:/usr/local/hive-0.12.0-cdh5.1.2/lib/velocity-1.5.jar:/usr/local/hive-0.12.0-cdh5.1.2/lib/xz-1.0.jar:/usr/local/hive-0.12.0-cdh5.1.2/lib/zookeeper-3.4.5-cdh5.1.2.jar::/usr/local/hadoop-2.3.0-cdh5.1.2/contrib/capacity-scheduler/*.jar
15/03/11 18:19:04 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/local/hadoop-2.3.0-cdh5.1.2/lib/native
15/03/11 18:19:04 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
15/03/11 18:19:04 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
15/03/11 18:19:04 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
15/03/11 18:19:04 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
15/03/11 18:19:04 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-358.23.2.el6.x86_64
15/03/11 18:19:04 INFO zookeeper.ZooKeeper: Client environment:user.name=hadoop
15/03/11 18:19:04 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop
15/03/11 18:19:04 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop
15/03/11 18:19:04 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x4b8264fb, quorum=localhost:2181, baseZNode=/hbase
15/03/11 18:19:04 DEBUG zookeeper.ClientCnxn: zookeeper.disableAutoWatchReset is false
15/03/11 18:19:04 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4b8264fb connecting to ZooKeeper ensemble=localhost:2181
15/03/11 18:19:04 INFO zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
15/03/11 18:19:04 INFO zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session
15/03/11 18:19:04 DEBUG zookeeper.ClientCnxn: Session establishment request sent on localhost/127.0.0.1:2181
15/03/11 18:19:04 INFO zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x24bee862026002d, negotiated timeout = 40000
15/03/11 18:19:04 DEBUG zookeeper.ZooKeeperWatcher: hconnection-0x4b8264fb, quorum=localhost:2181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
15/03/11 18:19:04 DEBUG zookeeper.ClientCnxn: Reading reply sessionid:0x24bee862026002d, packet:: clientPath:null serverPath:null finished:false header:: 1,3 replyHeader:: 1,4294978417,-101 request:: '/hbase/hbaseid,F response::
15/03/11 18:19:04 DEBUG zookeeper.ZooKeeperWatcher: hconnection-0x4b8264fb-0x24bee862026002d connected
15/03/11 18:19:04 INFO client.ZooKeeperRegistry: ClusterId read in ZooKeeper is null
.....................................................................................................................................................
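The reason those three properties change nothing is that they are Hive's own ZooKeeper settings (used for locks and token storage); the HBase client embedded in the storage handler reads hbase.zookeeper.quorum instead, and when that key is absent it falls back to its built-in default of localhost. A simplified sketch of that lookup (the property names are real; the lookup code is a model, not HBase source):

```python
# Model of the HBase client's quorum lookup: the hive.zookeeper.* keys
# are simply never consulted, so the built-in default 'localhost' wins.
effective_conf = {
    "hive.zookeeper.quorum": "l-hdfsgl1.bi.prod.cn1,l-hdfsgl2.bi.prod.cn1,l-hdfscc1.bi.prod.cn1",
    "hive.zookeeper.client.port": "2181",
}
quorum = effective_conf.get("hbase.zookeeper.quorum", "localhost")
print(quorum)  # localhost -- hence connectString=localhost:2181 in the log
```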
Two workarounds were eventually found.
1 (not recommended)
Point localhost in this machine's hosts file at the real ZooKeeper IP (a temporary hack at best):
vi /etc/hosts
--------------------------------------------------------------------------------------------------------
#127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.1.29.1 l-hdfsgl1.bi.prod.cn1
10.1.29.1 localhost
-------------------------------------------------------------------------------------------------------
2 (recommended)
Add the HBase ZooKeeper quorum property to hive-site.xml in Hive's conf directory:
------------------------------------------------------------------------------------------------------
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>l-hdfsgl1.bi.prod.cn1,l-hdfsgl2.bi.prod.cn1,l-hdfscc1.bi.prod.cn1</value>
  <description>ZooKeeper quorum used by the HBase client.</description>
</property>
--------------------------------------------------------------------------------------------------------
Exit Hive, start it again, and re-run the CREATE TABLE statement; it now succeeds.
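With hbase.zookeeper.quorum set, the HBase client assembles its connect string by attaching the client port to each quorum host, which is why the log then shows the real servers instead of localhost:2181. A sketch of that assembly (modelled loosely on HBase's ZKConfig, not the actual class):

```python
# Sketch: build the ZooKeeper connect string the way the HBase client
# does, by pairing every quorum host with the client port.
quorum = "l-hdfsgl1.bi.prod.cn1,l-hdfsgl2.bi.prod.cn1,l-hdfscc1.bi.prod.cn1"
client_port = 2181  # default for hbase.zookeeper.property.clientPort

connect_string = ",".join(
    f"{host}:{client_port}" for host in quorum.split(","))
print(connect_string)
# l-hdfsgl1.bi.prod.cn1:2181,l-hdfsgl2.bi.prod.cn1:2181,l-hdfscc1.bi.prod.cn1:2181
```

The same property can also be passed per session on the command line, e.g. hive --hiveconf hbase.zookeeper.quorum=l-hdfsgl1.bi.prod.cn1,l-hdfsgl2.bi.prod.cn1,l-hdfscc1.bi.prod.cn1, if editing hive-site.xml is not an option.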