Performance Testing Process Framework


 

Contents

1. Task Intake Phase

Deliverables: requirements review document

2. Test Planning Phase

Deliverables: schedule, data preparation plan, interface test plan (updated and adjusted as the process moves on), service call-relationship diagram

3. Test Preparation Phase

Deliverables: resource request form, script development progress sheet, prepared data files and backup/restore scripts

4. Test Execution and Tuning Phase

Deliverables: scenario statistics sheet, issue log

5. Report Publication and Review

Deliverables: interim report, meeting invitation (report walkthrough, problems encountered or risks identified during the load test)

6. Test Asset Archiving

7. Script Reference

The performance testing process falls into several phases: task intake, test planning, test preparation, test execution and tuning, and test wrap-up (report publication and review, followed by test asset archiving).

1. Task Intake Phase

Deliverables: requirements review document

Attend the requirements review. During the task, archive every version of the review document; if there is no document, confirm the requirements by email and archive that email instead.

2. Test Planning Phase

Deliverables: schedule, data preparation plan, interface test plan (updated and adjusted as the process moves on), service call-relationship diagram

 

Figure 2.1: Schedule

Figure 2.2: Data plan - survey of data dependencies (detailed diagram)

Figure 2.3: Data plan - survey of data dependencies (simplified diagram)

Figure 2.4: Interface test plan - survey of business call relationships

Figure 2.5: Service call-relationship diagram - drawn up when the business calls are complex

During the planning phase, send an email with the agreed test plan to stakeholders such as development and operations.

The email should cover the deliverables listed above: the schedule, the data plan, the interface plan, and the service call-relationship diagram.

3. Test Preparation Phase

Deliverables: resource request form, script development progress sheet, prepared data files and backup/restore scripts

Environment setup and adjustment - share the resource request form confirmed with development with the operations team and request the resources. Once the resources are allocated, the tester must verify that the configuration is correct, then deploy the load-test directory, the shell scripts, the nmon binary, and SSH mutual trust between the hosts (a setup sketch follows below).

The shell scripts here are mainly automon.sh and automonbatch.sh (both are explained in the script section later).
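SSH mutual trust is what lets the batch script later drive every monitored host without password prompts. A minimal sketch of setting it up from the controller host, assuming key-based root access is acceptable and that a hypothetical hosts.txt lists one monitored IP per line:

# Minimal sketch: establish SSH trust and push the monitoring toolkit to each host.
# hosts.txt is a hypothetical file; paths follow the scripts shown later in this article.
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # generate a key pair once if none exists
for host in `cat hosts.txt`; do
    ssh-copy-id root@${host}                                       # copy the public key over (asks for the password once)
    ssh root@${host} "mkdir -p /server/nmondir/nmonhis"            # create the monitoring directories
    scp /server/nmondir/nmon /server/nmondir/automon.sh root@${host}:/server/nmondir/
    ssh root@${host} "chmod +x /server/nmondir/nmon /server/nmondir/automon.sh"
done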

 

Figure 3.1: Resource request - required service resources surveyed with development

Application deployment and connectivity check - once the resources are in place, developers deploy the application services and wire up the environment; testers need to sync configuration details such as the ZooKeeper addresses (a quick connectivity check is sketched below).
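One quick way to confirm that the synced configuration points at a reachable ZooKeeper is the standard four-letter commands; a hedged example where the address is a placeholder rather than one taken from this environment:

# Check ZooKeeper reachability with the built-in four-letter commands.
# 10.100.24.15:2181 is a placeholder address; on newer ZooKeeper versions these
# commands must be enabled via 4lw.commands.whitelist.
echo ruok | nc 10.100.24.15 2181     # a healthy node answers "imok"
echo stat | nc 10.100.24.15 2181     # prints version, mode (leader/follower) and connection counts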

Script development and enhancement - develop the interface (load) scripts.

Figure 3.2: Script development progress sheet

Bulk data preparation - complex scenarios need data seeded in advance; prepare shell scripts that back up and restore the databases involved.

1. Data is usually generated through stored procedures or by batch-calling interfaces (see the sketch after these commands).

2. Back up the databases so that the generated data can be reused, or restored directly for a given scenario.

Backup: mysqldump -h<host> -P<port> -u<user> -p<password> --databases <dbname> > <file>.sql

Restore: mysql -u<user> -p<password> <dbname> < <file>.sql (e.g. mysql -uroot -psymdata xinda_product < ${backupfile})
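As a rough illustration of the stored-procedure approach mentioned in step 1, the sketch below seeds rows in bulk; the procedure, table, and column names (seed_perf_users, perf_user, etc.) are invented for the example and are not part of the project schema:

# Hedged sketch: seed test data with a throwaway stored procedure.
mysql -h<host> -P<port> -u<user> -p<password> <dbname> <<'SQL'
DROP PROCEDURE IF EXISTS seed_perf_users;
DELIMITER $$
CREATE PROCEDURE seed_perf_users(IN n INT)
BEGIN
  DECLARE i INT DEFAULT 0;
  WHILE i < n DO
    -- perf_user(name, mobile) is a hypothetical table; adjust to the real schema
    INSERT INTO perf_user (name, mobile)
    VALUES (CONCAT('perf_', i), CONCAT('138', LPAD(i, 8, '0')));
    SET i = i + 1;
  END WHILE;
END$$
DELIMITER ;
CALL seed_perf_users(100000);
SQL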

4. Test Execution and Tuning Phase

Deliverables: scenario statistics sheet, issue log

Baseline test - the baseline load point is usually around 100 TPS; while it runs, watch the service logs and the resource usage for anything abnormal (a quick way to check the achieved TPS is sketched below).
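A simple sanity check of the achieved throughput is to count requests per second in the access log; a hedged sketch that assumes an nginx access log in the default combined format (the log path is illustrative):

# Hedged sketch: derive per-second request counts (roughly TPS) from an nginx access log.
# Field 4 of the combined format looks like [28/Oct/2024:08:30:03, so characters 2-21 give the second.
awk '{print $4}' /home/devloper/work/nginx/logs/access.log | cut -c 2-21 | sort | uniq -c | sort -rn | head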

Single-transaction load test and tuning:

Mixed load test and tuning: multiple scenarios executed as an interleaved mix (LoadRunner is generally more convenient for designing and running complex scenarios).

Deliverables: scenario execution statistics sheet, issue log

Figure 4.1: Scenario statistics sheet

Figure 4.2: Issue log

5. Report Publication and Review

Deliverables: interim report, meeting invitation (report walkthrough, problems encountered or risks identified during the load test)

The interim report mainly contains:

1. Constraints: staffing/time limits, any mismatch with production resources or configuration, and scenarios that cannot be reproduced (e.g. production data that cannot be simulated).

2. Monitoring logs: the paths where logs were stored during the load test and the error information collected, so developers can retrieve them on their own.

3. Business metrics: a summary of the scenario statistics sheet above (see Figure 4.1).

4. Issue list: an interim report must keep this sheet continuously updated (see Figure 4.2).

5. System resource monitoring: for services running hot, show the metrics of the scenario-related services whose resource usage is too high (CPU, I/O reads/writes, disk reads/writes, etc.).

6. Risk assessment: usually described in connection with the constraints.

Meeting invitation: schedule a session to walk through the report and the problems or risks found during the load test.

6. Test Asset Archiving

Archive the deliverables from the five phases above in Jira.

 

7. Script Reference

# Define the nmon home directory
monbase=/server/nmondir
# Monitoring interval (seconds)
interval=$1
# Total number of monitoring samples
sum=$2
# Record the monitoring start time
monstart_time=`date +%Y%m%d%H%M%S`
# Remote (collector) server IP
rip=10.1.15.242
# Get the local server IP
if [[ `LC_ALL=C ifconfig | grep 'Bcast' |cut -d: -f2 | awk '{ print $1}' |cut -d'.' -f1,2,3` = "10.100.24" ]] || [[ `LC_ALL=C ifconfig | grep 'Bcast' |cut -d: -f2 | awk '{ print $1}' |cut -d'.' -f1,2,3` = "10.103.51" ]];then
    lip=`LC_ALL=C ifconfig | grep 'Bcast' |cut -d: -f2 | awk '{ print $1}'`
else
    # 10.100.24.85~88 run CentOS 7.0; on those hosts get lip as follows
    lip=`LC_ALL=C ifconfig |grep -A 1 'eth0:' |awk 'NR==2{print}' | awk '{ print $2}'`
fi
# Service name variable
servicename=`hostname | cut -d '.' -f 1`
# Remote server that stores the monitoring files
rip=10.1.15.242

# Move historical monitoring files to the nmonhis directory
echo "Moving historical monitoring files to the nmonhis directory"
mv /server/tomcat/logs/catalina.out.* ${monbase}/nmonhis;
cd ${monbase}
#mv *.nmon *.conf *.log ${monbase}/nmonhis;ls -lrt ${monbase}/nmonhis;ls -lrt
mv *.nmon *.conf *.log ${monbase}/nmonhis;

# Optional monitors: replace "[ $? -eq 0 ]" with "[ $? -ne 0 ]" to disable a monitor
# Start dubbo log monitoring (optional)
## Define the dubbo log
dubbolog=/home/tomcat/dubbo-governance.log
ps -ef |grep dubbo |grep -v grep
if [ $? -eq 0 ] && [ -f "${dubbolog}" ];then
    echo "${lip} starting dubbo log monitoring: ${lip}.dubbo.${monstart_time}.log"
    tail -f ${dubbolog} > ${monbase}/${lip}.dubbo.${monstart_time}.log &
    sleep 2
    ps -ef |grep tail
else
    sleep 1
fi

# Start nginx log monitoring
## Define the nginx log
nginxlog=/home/devloper/work/nginx/logs/error.log
ps -ef |grep nginx |grep -v grep
if [ $? -eq 0 ] && [ -f "${nginxlog}" ];then
    echo "${lip} starting nginx log monitoring: ${lip}.nginxerror.${monstart_time}.log"
    tail -f ${nginxlog} > ${monbase}/${lip}.nginxerror.${monstart_time}.log &
fi

# Start haproxy log monitoring (optional)
## Define the haproxy log
halog=/var/log/haproxy.log
ps -ef |grep haproxy |grep -v grep
if [ $? -eq 0 ] && [ -f "${halog}" ]; then
    echo "${lip} starting haproxy log monitoring: ${lip}.haproxy.${monstart_time}.log"
    tail -f ${halog} > ${monbase}/${lip}.haproxy.${monstart_time}.log &
    sleep 2
    ps -ef |grep tail
else
    sleep 1
fi

# Start tomcat log monitoring
## Define the tomcat log
tomcatlog=/server/tomcat/logs/catalina.out
ps -ef|grep tomcat |grep -v grep
if [ $? -eq 0 ] && [ -f "${tomcatlog}" ];then
    echo "${lip} starting tomcat log monitoring: ${lip}.tomcat.${monstart_time}.log"
    tail -f ${tomcatlog} > ${monbase}/${lip}.tomcat.${monstart_time}.log &
    sleep 2
    ps -ef |grep tail
else
    sleep 1
fi

# Start trident (sidekiq) log monitoring
## Define the sidekiq log
#sidekiqlog=/home/devloper/work/trident/log/sidekiq.log
sidekiqlog=/data/log/rails/sidekiq.log
ps -ef |grep sidekiq |grep -v grep
if [ $? -eq 0 ] && [ -f "${sidekiqlog}" ];then
    echo "${lip} starting sidekiq log monitoring: ${lip}.sidekiq.${monstart_time}.log"
    tail -f ${sidekiqlog} > ${monbase}/${lip}.sidekiq.${monstart_time}.log &
    sleep 2
    ps -ef |grep tail
else
    sleep 1
fi

# Start redis log monitoring (optional)
## Define the redis log
redislog=/server/redis/logs/redis.log
ps -fe|grep redis |grep -v grep
if [ $? -eq 0 ] && [ -f "${redislog}" ];then
    echo "${lip} starting redis log monitoring: ${lip}.redis.${monstart_time}.log"
    tail -f ${redislog} >${monbase}/${lip}.redis.${monstart_time}.log &
    sleep 2
    ps -ef |grep tail
else
    sleep 1
fi

# Start rabbitmq log monitoring (optional)
## Define the rabbitmq log
mqlog=/var/tmp/rabbitmq-tracing/RabbitMQ_Tracing.log
ps -fe|grep rabbitmq |grep -v grep
if [ $? -eq 0 ] && [ -f "${mqlog}" ];then
    echo "${lip} starting rabbitmq log monitoring: ${lip}.rabbitmq.${monstart_time}.log"
    tail -f ${mqlog} > ${lip}.rabbitmq.${monstart_time}.log &
    sleep 2
    ps -ef |grep tail
else
    sleep 1
fi

# Start mongo log monitoring (optional)
## Define the mongo log
mongolog=/server/mongodb/logs/mongodb.log
ps -fe|grep mongod |grep -v grep
if [ $? -eq 0 ] && [ -f "${mongolog}" ];then
    echo "${lip} starting mongo log monitoring: ${lip}.mongo.${monstart_time}.log"
    tail -f ${mongolog} > ${monbase}/${lip}.mongo.${monstart_time}.log &
    sleep 2
    ps -ef |grep tail
else
    sleep 1
fi

## Define the mysql logs
mysqlerr=/server/mysql/log/mysql.err.log
mysqlslow=/server/mysql/log/mysql.slow.log
mysqlerr_Trident=/server/mysql_data/mysql.err.log
mysqlslow_Trident=/server/mysql_data/mysql.slow.log
ps -fe|grep mysqld |grep -v grep
if [ $? -eq 0 ] && [ -f "${mysqlslow}" ];then
    echo "${lip} starting mysql slow-log monitoring: ${lip}.mysqlslow.${monstart_time}.log"
    tail -f ${mysqlslow} > ${monbase}/${lip}.mysqlslow.${monstart_time}.log &
    sleep 2
    ps -ef |grep tail
else
    sleep 1
fi
ps -fe|grep mysqld |grep -v grep
if [ $? -eq 0 ] && [ -f "${mysqlslow_Trident}" ];then
    echo "${lip} starting mysql slow-log monitoring: ${lip}.mysqlslow.${monstart_time}.log"
    tail -f ${mysqlslow_Trident} > ${monbase}/${lip}.mysqlslow.${monstart_time}.log &
    sleep 2
    ps -ef |grep tail
else
    sleep 1
fi
ps -fe|grep mysqld |grep -v grep
if [ $? -eq 0 ] && [ -f "${mysqlerr}" ];then
    echo "${lip} starting mysql error-log monitoring: ${lip}.mysqlerr.${monstart_time}.log"
    tail -f ${mysqlerr} > ${monbase}/${lip}.mysqlerr.${monstart_time}.log &
    sleep 2
    ps -ef |grep tail
else
    sleep 1
fi
ps -fe|grep mysqld |grep -v grep
if [ $? -eq 0 ] && [ -f "${mysqlerr_Trident}" ];then
    echo "${lip} starting mysql error-log monitoring: ${lip}.mysqlerr.${monstart_time}.log"
    tail -f ${mysqlerr_Trident} > ${monbase}/${lip}.mysqlerr.${monstart_time}.log &
    sleep 2
    ps -ef |grep tail
else
    sleep 1
fi

# Start vmstat monitoring (disabled)
#echo "${lip} starting vmstat monitoring, start time: ${monstart_time}, interval: ${interval}s, samples: ${sum}"
#df -hl >${lip}.vmstat.${monstart_time}.conf;sleep 1;vmstat ${interval} ${sum} >>${lip}.vmstat.${monstart_time}.conf &
#sleep 1
# Start iostat monitoring (disabled)
#echo "${lip} starting iostat monitoring, start time: ${monstart_time}, interval: ${interval}s, samples: ${sum}"
#iostat -x -k -d ${interval} ${sum} >>${lip}.iosat.${monstart_time}.conf &
#sleep 1
# Start vnstat monitoring (disabled)
#echo "${lip} starting vnstat monitoring, start time: ${monstart_time}, interval: ${interval}s, samples: ${sum}"
#vnstat -l -i eth0 >>${lip}.vnsat.${monstart_time}.conf &
#sleep 2

# Start dstat monitoring
echo "${lip} starting dstat monitoring, start time: ${monstart_time}, interval: ${interval}s, samples: ${sum}"
dstat -tlcmsgnrp ${interval} ${sum} >>${lip}.dstat.${monstart_time}.dstat &

# Start nmon monitoring
echo "${lip} starting nmon monitoring, start time: ${monstart_time}, interval: ${interval}s, samples: ${sum}"
${monbase}/nmon -F ${lip}.nmon.${monstart_time}.nmon -t -s $interval -c $sum &
sleep 2
ps -ef |grep nmon
sleep 1
let endtime=$interval*$sum+5
echo "${lip} monitoring processes started; the scenario will end in ${endtime} seconds, please wait"
sleep ${endtime}

# Collect tomcat error log entries ('失败' matches "failure" in the application's Chinese logs)
if [ -f "${monbase}/${lip}.tomcat.${monstart_time}.log" ]; then
    grep -A 30 'Exception\|ERROR\|Fail\|失败' ${monbase}/${lip}.tomcat.${monstart_time}.log > ${lip}.tomcat.${monstart_time}.error.log
fi
filename=${lip}.tomcat.${monstart_time}.error.log
filesize=`ls -l $filename | awk '{ print $5 }'`
minsize=0
if [ $filesize -eq $minsize ] ;then
    rm -rf ${lip}.tomcat.${monstart_time}.error.log
fi
sleep 5
echo "${lip} monitoring finished; killing monitoring processes still running in the background"
ps -ef | grep tail | grep -v grep | awk '{print $2}' | xargs kill -9 &
ps -ef |grep "/server/nmondir/nmon" | grep -v grep | awk '{print $2}' | xargs kill -9 &
#ps -ef | grep vmstat | grep -v grep | awk '{print $2}' | xargs kill -9 &
#ps -ef | grep iostat | grep -v grep | awk '{print $2}' | xargs kill -9 &
#ps -ef | grep vnstat | grep -v grep | awk '{print $2}' | xargs kill -9 &
ps -ef | grep dstat | grep -v grep | awk '{print $2}' | xargs kill -9 &
sleep 3
echo "${lip} monitoring task complete"
# Transfer the monitoring files to the central console (disabled)
#scp ${monbase}/*.log *.conf *.nmon ${rip}:${monbase} &
#scp $lip.tomcat.${monstart_time}.log ${rip}:${monbase} &
#echo "$lip finished transferring monitoring files to the central console"
sleep 5
exit 0

Usage example: automon.sh 30 100 (30 = monitoring interval in seconds, 100 = total number of samples)

Essentially the script is a wrapper around a single nmon run, plus log tailing and cleanup.
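Because the script sleeps for roughly interval x count seconds before cleaning up, it is usually started detached from the terminal; a hedged usage sketch (the run-log name is illustrative):

# Hedged usage sketch: 30-second interval, 100 samples (about 50 minutes), detached from the terminal.
cd /server/nmondir
nohup ./automon.sh 30 100 > automon.run.log 2>&1 &
tail -f automon.run.log     # watch the progress messages; Ctrl-C stops only the tail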

Script walkthrough:

1. Start log monitoring for each service and middleware (tomcat, dubbo, mq, nginx, redis, mongo, mysql slow log, mysql error log).

2. Start nmon.

3. When the monitoring sleep expires, filter the error entries out of the collected logs.

4. Kill the tail processes.

 

# Define the monitoring interval, total number of samples, and target environment
echo -e "Enter the monitoring interval (number, >=2 seconds), total number of samples (number) and environment (t/api), separated by spaces\nExample: 2 5 api"
read interval sum envname
if [ ${interval} -lt 2 ];then
    echo "The monitoring interval ${interval} is below 2 seconds and is not allowed"
    exit 0
fi
# Monitoring home directory
monbase=/server/nmondir
# tomcat home directory
tomcatbase=/server/tomcat/logs
# IP variable
ip=`LC_ALL=C ifconfig | grep 'inet addr:'| grep -v '127.0.0.1' |cut -d: -f2 | awk '{ print $1}'`
# Move historical monitoring files to the nmonhis directory
echo "Moving historical monitoring files to the nmonhis directory"
cd ${monbase}
mv *.nmon *.conf *.log ${monbase}/nmonhis;ls -lrt ${monbase}/nmonhis;ls -lrt
if [ "${envname}" = "api" ]; then switch="api"
elif [ "${envname}" = "t" ]; then switch="t"
else switch="*"
fi
case $switch in
api)
    echo "Connecting to the monitored servers of the ${envname} environment and starting the monitoring scripts"
    Host_List="10.100.24.10 10.100.24.11 10.100.24.12 10.100.24.13 10.100.24.133 10.100.24.14 10.100.24.15 10.100.24.16 10.100.24.17 10.100.24.18 10.100.24.19 10.100.24.73 10.100.24.74 10.100.24.75 10.100.24.76 10.100.24.77 10.100.24.78 10.100.24.79 10.100.24.8 10.100.24.80 10.100.24.81 10.100.24.82 10.100.24.83 10.100.24.84 10.100.24.85 10.100.24.86 10.100.24.87 10.100.24.88 10.100.24.89 10.100.24.9 10.103.51.63 10.103.51.71"
    for Host in $Host_List
    do
        ssh root@$Host ${monbase}/servermon.sh ${interval} ${sum}&
        ssh root@$Host ${monbase}/automon.sh ${interval} ${sum}&
    done
    ;;
t)
    echo "Connecting to the monitored servers of the ${envname} environment and starting the monitoring scripts"
    Host_List="10.103.51.101 10.103.51.102 10.103.51.137 10.103.51.138 10.103.51.139 10.103.51.156 10.103.51.164 10.103.51.174 10.103.51.217 10.103.51.221 10.103.51.222 10.103.51.225 10.103.51.229 10.103.51.230 10.103.51.231 10.103.51.232 10.103.51.233 10.103.51.234 10.103.51.235 10.103.51.237 10.103.51.26 10.103.51.43 10.103.51.52 10.103.51.54 10.103.51.55 10.103.51.56 10.103.51.57 10.103.51.58 10.103.51.59 10.103.51.60 10.103.51.61 10.103.51.62 10.103.51.63 10.103.51.64 10.103.51.65 10.103.51.66 10.103.51.67 10.103.51.68 10.103.51.69 10.103.51.70 10.103.51.71 10.103.51.72 10.103.51.73 10.103.51.74 10.103.51.75 10.103.51.76 10.103.51.77 10.103.51.79 10.103.51.80 10.103.51.88 10.103.51.89 10.103.51.90 10.103.51.91 10.103.51.92 10.103.51.93 10.103.51.94 10.103.51.95 10.103.51.96 10.103.51.97 10.103.51.98"
    for Host in $Host_List
    do
        ssh $Host ${monbase}/servermon.sh ${interval} ${sum}&
        ssh $Host ${monbase}/automon.sh ${interval} ${sum}&
    done
    ;;
*)
    echo "no switch can be matched!"
    ;;
esac
#done # end of multi-argument loop
#wait
echo "Batch monitoring finished"
exit 0

Usage example: run automonbatch.sh and enter 30 100 api at the prompt (30 = monitoring interval in seconds, 100 = total number of samples, api = target environment; the script accepts t or api).

Script walkthrough: iterate over the host list of the selected environment and remotely start servermon.sh and automon.sh on every host.
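Since the script reads its parameters from standard input, a run can also be driven non-interactively; a hedged example:

# Interactive run: type "30 100 api" when prompted (30-second interval, 100 samples, api environment).
./automonbatch.sh
# Non-interactive equivalent, feeding the three values through stdin:
echo "30 100 api" | ./automonbatch.sh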

 

# Monitoring home directory
monbase=/server/nmondir
# Record the start time
monstart_time=`date +%Y%m%d%H%M%S`
# IP variable
ip=`LC_ALL=C ifconfig | grep 'inet addr:'| grep -v '127.0.0.1' |cut -d: -f2 | awk '{ print $1}'`
# Remote monitored server list
#Hostlist=="192.168.206.88"
# Destination directory on the remote servers
dst=/server/nmondir
#echo -e "Enter the files (or wildcard patterns) to copy, separated by spaces\nFile examples: nmon automon.sh servermon.sh\nWildcard examples: *.log *.nmon"
#read scpfiename
# File names to copy are passed in as arguments
count=1
while [ "$#" -ge "1" ];do
    scpfilename=$1
    echo "File number $count is: $1"
    let count=count+1
    shift
    if [ "${scpfilename}" = "nmon" ] || [ "${scpfilename}" = "automon.sh" ] || [ "${scpfilename}" = "servermon.sh" ] || [ "${scpfilename}" = "rm.sh" ] ; then switch="1"
    elif [ "${scpfilename}" = "*.log" ] || [ "${scpfilename}" = "*.nmon" ] || [ "${scpfilename}" = "*.conf" ] || [ "${scpfilename}" = "*.timelog" ] || [ "${scpfilename}" = "*servercollect*.log" ] || [ "${scpfilename}" = "*servermon*.log" ] || [ "${scpfilename}" = "*.dstat" ] || [ "${scpfilename}" = "*.servercollect" ] ; then switch="2"
    else switch="*"
    fi
    case $switch in
    1)
        echo "Pushing local ${scpfilename} to the remote monitored servers"
        Host_List="10.103.51.101 10.103.51.102 10.103.51.137 10.103.51.138 10.103.51.139 10.103.51.156 10.103.51.164 10.103.51.174 10.103.51.217 10.103.51.221 10.103.51.222 10.103.51.225 10.103.51.229 10.103.51.230 10.103.51.231 10.103.51.232 10.103.51.233 10.103.51.234 10.103.51.235 10.103.51.237 10.103.51.26 10.103.51.43 10.103.51.52 10.103.51.54 10.103.51.55 10.103.51.56 10.103.51.57 10.103.51.58 10.103.51.59 10.103.51.60 10.103.51.61 10.103.51.62 10.103.51.63 10.103.51.64 10.103.51.65 10.103.51.66 10.103.51.67 10.103.51.68 10.103.51.69 10.103.51.70 10.103.51.71 10.103.51.72 10.103.51.73 10.103.51.74 10.103.51.75 10.103.51.76 10.103.51.77 10.103.51.79 10.103.51.80 10.103.51.88 10.103.51.89 10.103.51.90 10.103.51.91 10.103.51.92 10.103.51.93 10.103.51.94 10.103.51.95 10.103.51.96 10.103.51.97 10.103.51.98"
        for Host in $Host_List
        do
            scp -o GSSAPIAuthentication=no ${monbase}/${scpfilename} $Host:${dst}
        done
        echo "Pushing local ${scpfilename} to the remote monitored servers"
        Host_List="10.100.24.10 10.100.24.11 10.100.24.12 10.100.24.13 10.100.24.133 10.100.24.14 10.100.24.15 10.100.24.16 10.100.24.17 10.100.24.18 10.100.24.19 10.100.24.73 10.100.24.74 10.100.24.75 10.100.24.76 10.100.24.77 10.100.24.78 10.100.24.79 10.100.24.8 10.100.24.80 10.100.24.81 10.100.24.82 10.100.24.83 10.100.24.84 10.100.24.85 10.100.24.86 10.100.24.87 10.100.24.88 10.100.24.89 10.100.24.9 10.103.51.63 10.103.51.71"
        for Host in $Host_List
        do
            scp -o GSSAPIAuthentication=no ${monbase}/${scpfilename} $Host:${dst}
        done
        ;;
    2)
        # Choose the environment to operate on
        echo "Enter the environment abbreviation: t (functional test environment), api (API test environment)"
        read envname
        if [ "${envname}" = "api" ]; then
            echo "Collecting ${scpfilename} from the remote monitored servers of the ${envname} environment"
            Host_List="10.100.24.10 10.100.24.11 10.100.24.12 10.100.24.13 10.100.24.133 10.100.24.14 10.100.24.15 10.100.24.16 10.100.24.17 10.100.24.18 10.100.24.19 10.100.24.73 10.100.24.74 10.100.24.75 10.100.24.76 10.100.24.77 10.100.24.78 10.100.24.79 10.100.24.8 10.100.24.80 10.100.24.81 10.100.24.82 10.100.24.83 10.100.24.84 10.100.24.85 10.100.24.86 10.100.24.87 10.100.24.88 10.100.24.89 10.100.24.9 10.103.51.63 10.103.51.71"
            for Host in $Host_List
            do
                scp -o GSSAPIAuthentication=no $Host:${monbase}/${scpfilename} ${monbase}
            done
        elif [ "${envname}" = "t" ];then
            echo "Collecting ${scpfilename} from the remote monitored servers of the ${envname} environment"
            Host_List="10.103.51.101 10.103.51.102 10.103.51.137 10.103.51.138 10.103.51.139 10.103.51.156 10.103.51.164 10.103.51.174 10.103.51.217 10.103.51.221 10.103.51.222 10.103.51.225 10.103.51.229 10.103.51.230 10.103.51.231 10.103.51.232 10.103.51.233 10.103.51.234 10.103.51.235 10.103.51.237 10.103.51.26 10.103.51.43 10.103.51.52 10.103.51.54 10.103.51.55 10.103.51.56 10.103.51.57 10.103.51.58 10.103.51.59 10.103.51.60 10.103.51.61 10.103.51.62 10.103.51.63 10.103.51.64 10.103.51.65 10.103.51.66 10.103.51.67 10.103.51.68 10.103.51.69 10.103.51.70 10.103.51.71 10.103.51.72 10.103.51.73 10.103.51.74 10.103.51.75 10.103.51.76 10.103.51.77 10.103.51.79 10.103.51.80 10.103.51.88 10.103.51.89 10.103.51.90 10.103.51.91 10.103.51.92 10.103.51.93 10.103.51.94 10.103.51.95 10.103.51.96 10.103.51.97 10.103.51.98"
            for Host in $Host_List
            do
                scp -o GSSAPIAuthentication=no $Host:${monbase}/${scpfilename} ${monbase}
            done
        else
            echo "other situation"
        fi
        ;;
    *)
        echo "no switch can be matched!"
        ;;
    esac
done # end of multi-argument loop
echo "Batch copy finished"
if [ $switch = "2" ];then
    ./getservercollect.sh ${envname}
fi
exit 0

Script walkthrough: collects the monitoring files produced on the monitored servers onto one remote machine (gathering everything on a single host makes it easier to consolidate the monitoring data); it can also push the tool files (nmon, automon.sh, servermon.sh, rm.sh) out to all monitored hosts.

 

# Database connection parameters
dbhost=10.100.24.15
dbuser=root
dbpwd=C8dM1B9wd1iQC7Y
dbname="ApolloConfigDB ApolloPortalDB bangbang_manage ersdata juanpi_manage mysql scdata test wowo_manage xiaocheng_manage xiaodai xiaodai_black xiaodai_manage xiaodai_market_manage xiaodai_portal xiaodai_r360 xiaodai_third xiaoxiaodai xxl-job xxl-job-182 yinbin-xiaodai-backup0622 yinbintest"
# Start timestamp
startime=`date +%Y%m%d%H%M%S`
# Directory where the restore files are stored
filepath=/server/nmondir
echo -n "Specify the SQL file to restore from ('backupfile'): "
read backupfile
echo "Selected restore file: $backupfile"
if [ `echo $backupfile | grep -e fullbackup` ];then switch=1
elif [ `echo $backupfile | grep -e partialbackup` ];then switch=2
else echo "The selected restore file is invalid; please check the file and run this program again";exit 0
fi
case $switch in
1)
    echo "Starting a full restore from ${backupfile}; please wait for the \"restore finished\" message"
    sleep 2;
    #echo "temporary debug message: full restore starting"
    #mysql -u${dbuser} -p${dbpwd} ${dbname} <${backupfile}
    mysql -u${dbuser} -p${dbpwd} <${backupfile}
    ;;
2)
    currentbaklist="ast_current_account ast_interest_invest_reocrd ast_manually_lending ast_maturity_platform_user_redeem ast_out_config ast_to_match ast_to_match_detail ast_user_account ast_user_amount_match_queue ast_user_in_out_matching ast_user_out_apply ast_user_out_interest ast_user_out_matched_detail ast_warehousing curent_product_money_record current_ast_maturity_platform_user_redeem current_ast_user_out_apply current_product current_product_calendar current_product_contract current_product_desc current_product_history_rate current_product_pay_record current_product_rate_info current_product_statistics current_user_account current_user_income_invest_record current_user_interest_record current_user_invest_record current_user_pay_record current_user_redeem_record money_record user_account loan loan_asset loan_base loan_phase bank_card"
    cat /dev/null>${filepath}/table.CREATE.list
    cat /dev/null>${filepath}/table.CREATE.txt
    filterlist="CREATE"
    for filter in $filterlist;
    do
        grep $filter ${filepath}/${backupfile}>${filepath}/table.${filter}.list
        while read line;do
            echo $line|awk '{print $3}' >>${filepath}/table.${filter}.txt
        done<${filepath}/table.${filter}.list
    done
    echo "Tables that will be restored:"
    cat ${filepath}/table.${filter}.txt
    #echo "Tables that will be restored: ${currentbaklist}"
    sleep 2;
    echo -n "('judge'), enter 1 to confirm the restore, 2 to abort: "
    read judge
    echo "You entered: $judge"
    if [ $judge != "1" ];then echo "You chose to abort; run this program again once you are ready to restore";exit 0
    fi
    echo "Starting to restore the listed tables from ${backupfile}; please wait for the \"restore finished\" message"
    sleep 2;
    #echo "temporary debug message: partial restore starting"
    mysql -uroot -psymdata xinda_product < ${backupfile}
    ;;
*)
    echo "The input does not match the rules; please read the rules and run this program again";exit 0
    ;;
esac
# End timestamp
endtime=`date +%Y%m%d%H%M%S`
echo "Restore finished, start time: ${startime}, end time: ${endtime}";sleep 2;ls -lrt;exit 0

Script walkthrough: used to restore the data for a scenario, or to reuse previously prepared data across repeated runs.
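The script decides between a full and a partial restore purely from the file name (fullbackup vs. partialbackup), so the dumps need to be produced with matching names; a hedged sketch of how they could be created with mysqldump (database and table names are illustrative):

# Hedged sketch: produce dump files whose names match the restore script's naming convention.
ts=`date +%Y%m%d%H%M%S`
# Full dump of the databases a scenario touches (the list here is illustrative):
mysqldump -h10.100.24.15 -P3306 -uroot -p<password> --databases xiaodai xiaodai_manage > /server/nmondir/fullbackup.${ts}.sql
# Partial dump of only the tables one scenario modifies (table names are placeholders):
mysqldump -h10.100.24.15 -P3306 -uroot -p<password> xinda_product current_product current_user_account > /server/nmondir/partialbackup.${ts}.sql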

 

# Monitoring home directory
monbase=/server/nmondir
# tomcat home directory
tomcatbase=/server/tomcat/logs
# tomcat start/stop directory
tomcatrestart=/server/tomcat/bin
# IP variable
lip=`LC_ALL=C ifconfig | grep 'inet addr:'| grep -v '127.0.0.1' |cut -d: -f2 | awk '{ print $1}'`
# Location of the file to replace
filedir1=/server/tomcat/webapps/ROOT/WEB-INF/classes/spring/
# File to replace
file1=springmvc.xml
# Perform the file replacement
echo "${lip} mv ${file1} ${file1}.bak"
cd ${filedir1}/;ls -lrt;sleep 1;mv ${file1} ${file1}.bak;ls -lrt;sleep 1;
echo "${lip} cp ${monbase}/${file1} ${filedir1}"
cp ${monbase}/${file1} ${filedir1};ls -lrt;sleep 1;
chmod 664 ${filedir1}/${file1}
echo "Restarting the tomcat service"
ps -fe|grep tomcat |grep -v grep
if [ $? -eq 0 ]
then
    echo "${lip} tomcat process is still running,need to restart"
    cd /etc/init.d;./tomcat restart;sleep 2;
    echo "${lip} output the tomcat running status"
    #tail -f /server/tomcat/logs/catalina.out |grep "Server startup in"
else
    echo "tomcat are not running,need to start"
    cd /etc/init.d;./tomcat start;sleep 1;
    echo "${lip} output the tomcat running status";
    #tail -f /server/tomcat/logs/catalina.out |grep "Server startup in"
fi
exit 0

# Older variant that kills tomcat and restarts it via startup.sh (kept for reference):
#ps -fe|grep tomcat |grep -v grep
#if [ $? -eq 0 ]
#  then
#  echo "${lip} tomcat process is still running,need to kill"
#  ps -ef | grep tomcat | grep -v grep | awk '{print $2}' | xargs kill -9;sleep 3
#  echo "${lip} output tomcat process after killing "
#  ps -ef | grep tomcat;sleep 2
#  echo "${lip} stop process end"
#  sh ${tomcatrestart}/startup.sh
#  echo "${lip} output tomcat process after killing and running startup.sh"
#  sleep 1;echo "ps -ef |grep tomcat"
#  ps -ef |grep tomcat
#  sleep 2;echo "${lip} output the tomcat running status"
#else
#  echo "tomcat are not running,need to start"
#  sh ${tomcatrestart}/startup.sh;
#  echo "${lip} output tomcat process after running startup.sh";
#  sleep 2;echo "ps -ef |grep tomcat";
#  ps -ef |grep tomcat;
#  sleep 1;echo "${lip} output the tomcat running status";
#fi
#exit 0

Script walkthrough: before a load test, automatically swap in Java code or configuration files, for example a universal-captcha stub or an XML configuration file, and then restart Tomcat.

Benefit: developers can supply code changes specifically for the load test, with no risk of those test-only changes being accidentally released to production.

 

 
