Phoenix Installation and Testing
Advantages of Apache Phoenix:
- The power of standard SQL (plus UPSERT) and JDBC APIs, with full ACID transaction capabilities
- Leverages HBase as its backing store
- Fully integrated with other Hadoop products such as Spark, Hive, Pig, Flume, and MapReduce
Deploying Apache Phoenix
Version selection
- Use the official binary package phoenix-4.14.0-cdh5.14.2 directly; a package close to your own CDH version also works.
- Or build from source to pick up extra fixes, for example a self-built phoenix-4.14.1-cdh5.16.1 with the Kerberos bug fix and the DBeaver bug fix.
- On the HBase side, the relatively stable production versions are 0.98 and 1.2.0.
[hadoop@bigdata01 software]$ ll
total 1799020
# the downloaded tarball
-rw-rw-r-- 1 hadoop hadoop 412461182 Jun 5 2018 apache-phoenix-4.14.0-cdh5.14.2-bin.tar.gz
# extract the tarball
[hadoop@bigdata01 software]$ tar -xzvf apache-phoenix-4.14.0-cdh5.14.2-bin.tar.gz -C ../app/
[hadoop@bigdata01 software]$ cd ../app/
# create a symlink
[hadoop@bigdata01 app]$ ln -s apache-phoenix-4.14.0-cdh5.14.2-bin phoenix
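The symlink indirection pays off at upgrade time: only the link needs repointing. A throwaway sketch below recreates the layout in /tmp (the paths are illustrative, not the real install):

```shell
# Illustrative only: rebuild the versioned-dir + symlink layout in a scratch dir.
mkdir -p /tmp/phx-demo/apache-phoenix-4.14.0-cdh5.14.2-bin
cd /tmp/phx-demo
# -sfn replaces any existing link, so re-running (or upgrading) is safe
ln -sfn apache-phoenix-4.14.0-cdh5.14.2-bin phoenix
readlink phoenix   # -> apache-phoenix-4.14.0-cdh5.14.2-bin
```

Scripts and configs can then reference the stable `phoenix` path regardless of the version installed.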
Change into the Phoenix install directory
[hadoop@bigdata01 phoenix]$ ll
total 449012
drwxrwxr-x 4 hadoop hadoop 4096 Jun 5 2018 bin
drwxrwxr-x 3 hadoop hadoop 127 Jun 5 2018 examples
-rw-rw-r-- 1 hadoop hadoop 144163 Jun 5 2018 LICENSE
-rw-rw-r-- 1 hadoop hadoop 10509 Jun 5 2018 NOTICE
-rw-rw-r-- 1 hadoop hadoop 171018736 Jun 5 2018 phoenix-4.14.0-cdh5.14.2-client.jar  # client jar: loaded by application code
-rw-rw-r-- 1 hadoop hadoop 77671902 Jun 5 2018 phoenix-4.14.0-cdh5.14.2-hive.jar
-rw-rw-r-- 1 hadoop hadoop 117815157 Jun 5 2018 phoenix-4.14.0-cdh5.14.2-pig.jar
-rw-rw-r-- 1 hadoop hadoop 7138744 Jun 5 2018 phoenix-4.14.0-cdh5.14.2-queryserver.jar
-rw-rw-r-- 1 hadoop hadoop 39216559 Jun 5 2018 phoenix-4.14.0-cdh5.14.2-server.jar  # server jar: deployed alongside HBase
The jar below is the Spark/Phoenix integration jar:
-rw-rw-r-- 1 hadoop hadoop 86578 Jun 5 2018 phoenix-spark-4.14.0-cdh5.14.2.jar
Copy the Phoenix server jar into HBase's lib directory:
[hadoop@bigdata01 phoenix]$ cp phoenix-4.14.0-cdh5.14.2-server.jar ../hbase/lib/
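On a multi-node cluster the server jar must land in the lib/ directory of every RegionServer, not only this node. A dry-run sketch (the host list is hypothetical; drop the `echo` to actually copy):

```shell
# Hypothetical RegionServer hosts -- replace with your own.
JAR=phoenix-4.14.0-cdh5.14.2-server.jar
HOSTS="bigdata02 bigdata03"
for h in $HOSTS; do
  # echo makes this a dry run; remove it to perform the scp
  echo scp "$JAR" "$h:/home/hadoop/app/hbase/lib/"
done
```

Every RegionServer then needs a restart to pick the jar up.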
Next, add the Phoenix settings to HBase's configuration: the four properties below go into hbase-site.xml.
[hadoop@bigdata01 app]$ cd hbase
[hadoop@bigdata01 hbase]$ cd conf/
[hadoop@bigdata01 conf]$ vi hbase-site.xml
<property>
  <name>hbase.table.sanity.checks</name>
  <value>false</value>
</property>
<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>
<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>
<property>
  <name>phoenix.schema.mapSystemTablesToNamespace</name>
  <value>true</value>
</property>
HBase picks up its configuration from the files under conf/.
After editing hbase-site.xml, restart HBase so the settings above take effect.
[hadoop@bigdata01 conf]$ cd ../
# restart so the parameters and the jar above take effect
[hadoop@bigdata01 hbase]$ bin/stop-hbase.sh
[hadoop@bigdata01 hbase]$ bin/start-hbase.sh
# verify the processes
[hadoop@bigdata01 hbase]$ jps
Then change into Phoenix's bin directory.
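After the restart, jps should list both HMaster and HRegionServer (on a single pseudo-distributed node). A small helper that gates the next step on a jps-style listing; the sample output below is fabricated for illustration:

```shell
# Returns success only if both HBase daemons appear in the listing.
check_hbase_up() {
  echo "$1" | grep -q 'HMaster' && echo "$1" | grep -q 'HRegionServer'
}

# Fabricated jps output for demonstration:
sample='2301 HMaster
2455 HRegionServer
2710 Jps'
check_hbase_up "$sample" && echo "HBase is up"   # prints "HBase is up"
```

In a real script you would call it as `check_hbase_up "$(jps)"`.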
[hadoop@bigdata01 hbase]$ cd ../phoenix
[hadoop@bigdata01 phoenix]$ cd bin
[hadoop@bigdata01 bin]$ ll
First delete the hbase-site.xml shipped with Phoenix, then symlink HBase's copy into Phoenix:
[hadoop@bigdata01 bin]$ rm -f hbase-site.xml
[hadoop@bigdata01 bin]$ ln -s /home/hadoop/app/hbase/conf/hbase-site.xml hbase-site.xml
Note:
What if HDFS is running in HA mode?
On CDH, also symlink the two files below into Phoenix's bin folder; hbase-site.xml needs them to resolve the HA nameservice:
cdh: /etc/hdfs/conf/core-site.xml
     /etc/hdfs/conf/hdfs-site.xml
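Concretely, the two links could be created as below (a dry run via `echo`; the Phoenix path is the install location assumed in this walkthrough):

```shell
PHOENIX_BIN=/home/hadoop/app/phoenix/bin   # assumed install path
for f in core-site.xml hdfs-site.xml; do
  # echo = dry run; remove it to create the links for real
  echo ln -s "/etc/hdfs/conf/$f" "$PHOENIX_BIN/$f"
done
```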
Mind the Python version: sqlline below requires Python 2.7, so do not use any other version.
To run it with an explicit Python 2.7:
/xxx/xxx/python2.7 ./sqlline.py bigdata01:2181
Startup
[hadoop@bigdata01 bin]$ ./sqlline.py bigdata01:2181
..
When the progress below reaches 100%, startup succeeded:
133/133 (100%) Done
Done
sqlline version 1.2.0
0: jdbc:phoenix:bigdata01:2181>
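The prompt shows the connection string sqlline built. An application that puts the client jar on its classpath connects with the same pattern, `jdbc:phoenix:<zookeeper-quorum>:<port>`; a sketch of assembling it with this walkthrough's values:

```shell
ZK_QUORUM=bigdata01   # ZooKeeper quorum host(s), comma-separated if several
ZK_PORT=2181
JDBC_URL="jdbc:phoenix:${ZK_QUORUM}:${ZK_PORT}"
echo "$JDBC_URL"      # -> jdbc:phoenix:bigdata01:2181
```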
Data types
MySQL | Phoenix data type |
---|---|
INT | INTEGER |
BIGINT | BIGINT |
FLOAT | FLOAT |
DOUBLE | DOUBLE |
DECIMAL | DECIMAL |
VARCHAR | VARCHAR |
CHAR | CHAR |
DATETIME | TIMESTAMP |
TIMESTAMP | TIMESTAMP |
CHAR | VARCHAR |
When a column is VARCHAR or CHAR in MySQL, it is best to double its length in Phoenix.
Syntax
CREATE SCHEMA is the equivalent of creating a database.
CREATE TABLE
CREATE TABLE my_schema.my_table
( id BIGINT not null primary key, date Date);
Composite primary key
CREATE TABLE IF NOT EXISTS my_schema.my_table (
org_id CHAR(15), entity_id CHAR(15), payload binary(1000),
CONSTRAINT pk PRIMARY KEY (org_id, entity_id) );
A Phoenix table must declare a primary key, which maps to the HBase rowkey.
UPSERT syntax: apart from UPSERT, Phoenix supports standard SQL.
UPSERT VALUES merges INSERT and UPDATE: if the row does not exist it is inserted, and if it already exists it is updated.
UPSERT INTO TEST VALUES('foo','bar',3);
UPSERT INTO TEST(NAME,ID) VALUES('foo',123);
ON DUPLICATE KEY UPDATE
Phoenix also supports an ON DUPLICATE KEY clause for atomic upserts; MySQL has the analogous INSERT ... ON DUPLICATE KEY UPDATE.
Example
create schema bigdata;  -- schema names are case-insensitive and converted to upper case by default
CREATE TABLE bigdata.test
( id BIGINT not null primary key, name varchar(255), age integer);
upsert into bigdata.test values(1,'Jack',18);
upsert into bigdata.test values(2,'Tom',16);
upsert into bigdata.test values(2,'Tom',116);
CREATE INDEX test_idx ON bigdata.test(name,age);
0: jdbc:phoenix:bigdata01:2181> !tables
+------------+--------------+-------------+---------------+----------+------------+-----+
| TABLE_CAT | TABLE_SCHEM | TABLE_NAME | TABLE_TYPE | REMARKS | TYPE_NAME | SEL |
+------------+--------------+-------------+---------------+----------+------------+-----+
| | bigdata | TEST_IDX | INDEX | | | |
| | SYSTEM | CATALOG | SYSTEM TABLE | | | |
| | SYSTEM | FUNCTION | SYSTEM TABLE | | | |
| | SYSTEM | LOG | SYSTEM TABLE | | | |
| | SYSTEM | SEQUENCE | SYSTEM TABLE | | | |
| | SYSTEM | STATS | SYSTEM TABLE | | | |
| | bigdata | TEST | TABLE | | | |
+------------+--------------+-------------+---------------+----------+------------+-----+
0: jdbc:phoenix:bigdata01:2181> select * from bigdata.TEST;
+-----+-------+------+
| ID | NAME | AGE |
+-----+-------+------+
| 2 | Tom | 116 |
| 1 | Jack | 18 |
+-----+-------+------+
2 rows selected (0.04 seconds)
0: jdbc:phoenix:bigdata01:2181> select * from bigdata.TEST_IDX;
+---------+--------+------+
| 0:NAME | 0:AGE | :ID |
+---------+--------+------+
| Tom | 116 | 2 |
| Jack | 18 | 1 |
+---------+--------+------+
# this query uses a RANGE SCAN (via the index)
select age from bigdata.test where name='Jack';
RANGE SCAN OVER bigdata:TEST_IDX ['Jack']
# the query below uses a full table scan
select name from bigdata.test where age=116;
FULL SCAN OVER bigdata:TEST_IDX
So the real question in production is how to design indexes so that queries hit RANGE SCAN over the index table with a high hit rate.
A Phoenix schema maps to an HBase namespace, which plays the role of a database.